no code implementations • 22 Feb 2024 • Abhijeet Parida, Daniel Capellan-Martin, Sara Atito, Muhammad Awais, Maria J. Ledesma-Carbayo, Marius G. Linguraru, Syed Muhammad Anwar
In this context, we introduce Diverse Concept Modeling (DiCoM), a novel self-supervised training paradigm that leverages a student-teacher framework to learn diverse concepts and thereby effective representations of CXR data.
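The entry lists no code, so the following PyTorch sketch is only a rough, generic illustration of a student-teacher self-supervised objective with a momentum (EMA) teacher; it is not DiCoM's actual architecture or training recipe, and the encoder sizes, temperatures, momentum value, and noise-based "augmentation" are all placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder standing in for the CXR backbone (illustrative only).
def make_encoder(dim_in=196, dim_out=64):
    return nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out))

student = make_encoder()
teacher = make_encoder()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad = False  # teacher is updated only via EMA, never by gradients

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
momentum = 0.996  # assumed EMA coefficient for the teacher update

def augment(x):
    # Placeholder augmentation: add small Gaussian noise to simulate two views.
    return x + 0.1 * torch.randn_like(x)

for step in range(100):
    x = torch.randn(32, 196)              # stand-in for a batch of image features
    view_s, view_t = augment(x), augment(x)

    s_out = student(view_s)
    with torch.no_grad():
        t_out = teacher(view_t)

    # Cross-entropy between sharpened teacher and student distributions
    # (a DINO-style distillation objective, used here purely as an example).
    loss = torch.sum(-F.softmax(t_out / 0.04, dim=-1) *
                     F.log_softmax(s_out / 0.1, dim=-1), dim=-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Momentum (EMA) update of the teacher from the student weights.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```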
no code implementations • 22 Feb 2024 • Daniel Capellán-Martín, Abhijeet Parida, Juan J. Gómez-Valverde, Ramon Sanchez-Jacob, Pooneh Roshanitabrizi, Marius G. Linguraru, María J. Ledesma-Carbayo, Syed M. Anwar
We demonstrate improvements in TB detection performance (top AUC/AUPR gains of $\sim$12.7% in adults and $\sim$13.4% in children) with self-supervised pre-training compared to fully-supervised (i.e., non-pre-trained) ViT models, achieving top performances of 0.959 AUC and 0.962 AUPR in adult TB detection, and 0.697 AUC and 0.607 AUPR in zero-shot pediatric TB detection.
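The AUC and AUPR metrics quoted above are standard ranking measures for binary classifiers; the short sketch below shows how they are typically computed with scikit-learn, using made-up predictions and labels purely to demonstrate the metric calls (not the paper's data or results).

```python
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical predicted TB probabilities and binary ground-truth labels.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.12, 0.40, 0.85, 0.67, 0.33, 0.91, 0.05, 0.58]

auc = roc_auc_score(y_true, y_score)             # area under the ROC curve
aupr = average_precision_score(y_true, y_score)  # area under the precision-recall curve

print(f"AUC:  {auc:.3f}")
print(f"AUPR: {aupr:.3f}")
```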
no code implementations • 6 Feb 2024 • Abhijeet Parida, Zhifan Jiang, Roger J. Packer, Robert A. Avery, Syed M. Anwar, Marius G. Linguraru
However, benchmarking the effectiveness of harmonization techniques has been a challenge due to the lack of widely available standardized datasets with ground truths.
no code implementations • 5 Aug 2015 • Awais Mansoor, Juan J. Cerrolaza, Robert A. Avery, Marius G. Linguraru
In this work, we propose a partitioned joint statistical shape model approach with sparse appearance learning for the segmentation of healthy and pathological AVP.
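The paper's partitioned joint shape model with sparse appearance learning is considerably more involved than can be shown here; as a minimal, generic illustration of the underlying statistical-shape-model idea only (PCA over pre-aligned landmark shapes, with synthetic data and an assumed 95% variance cutoff), consider the sketch below.

```python
import numpy as np

# Hypothetical training set: N shapes, each with K landmarks in 2D,
# assumed already rigidly aligned (e.g., via Procrustes analysis).
rng = np.random.default_rng(0)
N, K = 40, 30
shapes = rng.normal(size=(N, K * 2))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA yields the principal modes of shape variation; keep enough modes
# to explain ~95% of the variance (an assumed threshold).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
n_modes = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
modes = Vt[:n_modes]                      # principal shape modes
stdevs = S[:n_modes] / np.sqrt(N - 1)     # per-mode standard deviations

# A plausible new shape is the mean plus a bounded combination of the modes.
b = np.clip(rng.normal(size=n_modes), -3, 3) * stdevs
new_shape = mean_shape + b @ modes
print(new_shape.reshape(K, 2).shape)      # (30, 2) landmark coordinates
```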