1 code implementation • 16 Nov 2024 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
In this paper, we discuss improving the performance of active learning algorithms in terms of both prediction accuracy and negative flips.
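A negative flip is commonly defined as a sample the previous model classified correctly but the updated model misclassifies. Below is a minimal sketch of the corresponding metric; the function name and toy arrays are illustrative and not taken from the paper.

```python
import numpy as np

def negative_flip_rate(old_preds, new_preds, labels):
    """Fraction of samples the old model got right but the new model gets wrong."""
    old_preds, new_preds, labels = map(np.asarray, (old_preds, new_preds, labels))
    flips = (old_preds == labels) & (new_preds != labels)
    return flips.mean()

# Toy example: one of five samples flips from correct to incorrect -> 0.2
print(negative_flip_rate([0, 1, 2, 1, 0], [0, 2, 2, 1, 0], [0, 1, 2, 1, 1]))
```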
1 code implementation • 30 Oct 2024 • Kiran Kokilepersaud, Seulgi Kim, Mohit Prabhushankar, Ghassan AlRegib
Ideally, SSL algorithms would take advantage of this hierarchical emergence by introducing an additional regularization term that accounts for this local dimensional collapse effect.
1 code implementation • 29 Oct 2024 • Jorge Quesada, Zoe Fowler, Mohammad Alotaibi, Mohit Prabhushankar, Ghassan AlRegib
Additionally, we demonstrate that the performance of automated methods can be improved by up to 68% via a fine-tuning approach.
no code implementations • 20 Aug 2024 • Mohit Prabhushankar, Kiran Kokilepersaud, Jorge Quesada, Yavuz Yarici, Chen Zhou, Mohammad Alotaibi, Ghassan AlRegib, Ahmad Mustafa, Yusufjon Kumakov
However, specialized applications that require expert labels lag in data availability.
no code implementations • 20 Aug 2024 • Ghassan AlRegib, Mohit Prabhushankar, Kiran Kokilepersaud, Prithwijit Chowdhury, Zoe Fowler, Stephanie Trejo Corona, Lucas Thomaz, Angshul Majumdar
Balancing personalization and generalization is an important challenge to tackle, as the variation within OCT scans of patients between visits can be minimal while the difference in manifestation of the same disease across different patients may be substantial.
no code implementations • 12 Jun 2024 • Prithwijit Chowdhury, Mohit Prabhushankar, Ghassan AlRegib, Mohamed Deriche
Explainable AI (XAI) has revolutionized the field of deep learning by enabling users to place more trust in neural network models.
1 code implementation • 12 Jun 2024 • Efe Ozturk, Mohit Prabhushankar, Ghassan AlRegib
In this study, we introduce an intelligent Test Time Augmentation (TTA) algorithm designed to enhance the robustness and accuracy of image classification models against viewpoint variations.
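As a rough illustration of plain test-time augmentation (not the paper's intelligent selection of augmentations), the sketch below averages softmax outputs over a fixed set of augmented views; the function and the particular view set are assumptions of mine.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

def tta_predict(model, image, augmentations):
    """Average class probabilities over augmented views of one image (C x H x W tensor)."""
    model.eval()
    with torch.no_grad():
        probs = [F.softmax(model(aug(image).unsqueeze(0)), dim=1) for aug in augmentations]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Example view set: identity, horizontal flip, and two small rotations
views = [transforms.Lambda(lambda x: x),
         transforms.RandomHorizontalFlip(p=1.0),
         transforms.RandomRotation((10, 10)),
         transforms.RandomRotation((-10, -10))]
```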
1 code implementation • 11 Jun 2024 • Yavuz Yarici, Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
In scenarios where labels are absent, these importance maps provide more intuitive explanations, as they are integral to the human visual system.
no code implementations • 10 Jun 2024 • Kiran Kokilepersaud, Yavuz Yarici, Mohit Prabhushankar, Ghassan AlRegib
In reality, the class label is only one level of a hierarchy of different semantic relationships known as a taxonomy.
1 code implementation • 1 Jun 2024 • Mohit Prabhushankar, Ghassan AlRegib
We show that every image, network, prediction, and explanatory technique has a unique uncertainty.
no code implementations • 1 Jun 2024 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
By combining this approach with active learning, a well-known machine learning paradigm for data selection, we arrive at a comprehensive and innovative framework for training set selection in seismic interpretation.
no code implementations • 25 May 2024 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
We refer to the underlying preservation mechanism as transitional feature preservation.
2 code implementations • 22 May 2024 • Mohit Prabhushankar, Ghassan AlRegib
We observe the following: (i) simple methodologies like negative log likelihood and margin classifiers outperform state-of-the-art uncertainty and out-of-distribution detection techniques in terms of misprediction rates, and (ii) the proposed GradTrust is among the top-2 performing methods on 37 of the considered 38 experimental modalities.
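For context, the two simple baselines mentioned, negative log likelihood of the predicted class and the softmax margin, can be computed directly from logits. The hedged sketch below uses my own names and shapes, not the paper's code.

```python
import torch
import torch.nn.functional as F

def confidence_scores(logits):
    """Per-sample scores from a (batch, classes) logit tensor:
    NLL of the predicted class (lower = more confident) and the
    top-1 minus top-2 softmax margin (higher = more confident)."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    nll = -torch.log(top2[:, 0])
    margin = top2[:, 0] - top2[:, 1]
    return nll, margin
```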
no code implementations • 15 Mar 2024 • Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib
Annotator label uncertainty manifests in variations of labeling quality.
no code implementations • 11 Dec 2023 • Johannes Schneider, Mohit Prabhushankar
The learning dynamics of deep neural networks are not well understood.
1 code implementation • 17 Nov 2023 • Kiran Kokilepersaud, Yash-yee Logan, Ryan Benkert, Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib, Enrique Corona, Kunjan Singh, Mostafa Parchami
In this paper, we introduce the FOCAL (Ford-OLIVES Collaboration on Active Learning) dataset, which enables the study of the impact of annotation cost within a video active learning setting.
1 code implementation • 20 Jul 2023 • Zoe Fowler, Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
There exist two types of clinical trials: retrospective and prospective.
no code implementations • 24 May 2023 • Kiran Kokilepersaud, Stephanie Trejo Corona, Mohit Prabhushankar, Ghassan AlRegib, Charles Wykoff
We exploit this relationship by using the clinical data as pseudo-labels for our data without biomarker labels in order to choose positive and negative instances for training a backbone network with a supervised contrastive loss.
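A minimal sketch of a supervised-contrastive-style loss in which positives are pairs sharing the same clinical pseudo-label is given below; the single-view formulation, normalization, and temperature are simplifying assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def pseudo_label_supcon(embeddings, pseudo_labels, temperature=0.07):
    """Supervised contrastive loss where positives share a clinical pseudo-label."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    # log-softmax over all other samples (self-similarity excluded from the denominator)
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    pos = ((pseudo_labels.view(-1, 1) == pseudo_labels.view(1, -1)) & ~eye).float()
    return -((log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()
```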
no code implementations • 28 Apr 2023 • Kiran Kokilepersaud, Mohit Prabhushankar, Yavuz Yarici, Ghassan AlRegib, Armin Parchami
In this work, we present a methodology to shape a fisheye-specific representation space that reflects the interaction between distortion and semantic context present in this data modality.
no code implementations • 6 Apr 2023 • Jinsol Lee, Charlie Lehman, Mohit Prabhushankar, Ghassan AlRegib
We define purview as the additional capacity necessary to characterize inference samples that differ from the training data.
2 code implementations • 16 Feb 2023 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib, Armin Parchami, Enrique Corona
To alleviate this issue, we propose a grounded second-order definition of information content and sample importance within the context of active learning.
1 code implementation • 11 Feb 2023 • Mohit Prabhushankar, Ghassan AlRegib
This paper conjectures and validates a framework that allows for action during inference in supervised neural networks.
no code implementations • 12 Jan 2023 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
However, existing strategies base the data selection directly on the data representation of the unlabeled data, which is random for OOD samples by definition.
1 code implementation • 10 Nov 2022 • Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib
Our evaluation of existing uncertainty estimation algorithms in the presence of HLU indicates the limitations of both existing uncertainty metrics and the algorithms themselves in responding to HLU.
no code implementations • 9 Nov 2022 • Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
This is accomplished by leveraging the larger amount of clinical data as pseudo-labels for our data without biomarker labels in order to choose positive and negative instances for training a backbone network with a supervised contrastive loss.
1 code implementation • 22 Sep 2022 • Mohit Prabhushankar, Kiran Kokilepersaud, Yash-yee Logan, Stephanie Trejo Corona, Ghassan AlRegib, Charles Wykoff
The dataset consists of 1268 near-IR fundus images each with at least 49 OCT scans, and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME.
1 code implementation • 17 Sep 2022 • Mohit Prabhushankar, Ghassan AlRegib
Finally, we ground the proposed machine introspection to human introspection for the application of image quality assessment.
no code implementations • 21 Jun 2022 • Yash-yee Logan, Mohit Prabhushankar, Ghassan AlRegib
Hence, active learning techniques that are developed for natural images are insufficient for handling medical data.
no code implementations • 16 Jun 2022 • Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
In seismic interpretation, pixel-level labels of various rock structures can be time-consuming and expensive to obtain.
no code implementations • 16 Jun 2022 • Jinsol Lee, Mohit Prabhushankar, Ghassan AlRegib
We propose to utilize gradients for detecting adversarial and out-of-distribution samples.
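One simple way to realize this idea, sketched below under my own assumptions (a uniform "confounding" target and a full-parameter gradient norm), is to backpropagate a label-free loss and score each sample by the size of the resulting gradient.

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Score one sample (C x H x W tensor) by the parameter-gradient norm of a
    label-free loss against a uniform target; larger norms suggest the model
    would need a bigger update to fit the sample."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    uniform = torch.full_like(logits, 1.0 / logits.size(1))
    loss = F.kl_div(F.log_softmax(logits, dim=1), uniform, reduction='batchmean')
    loss.backward()
    grads = [p.grad.flatten() for p in model.parameters() if p.grad is not None]
    return torch.cat(grads).norm()
```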
2 code implementations • 24 Feb 2022 • Ghassan AlRegib, Mohit Prabhushankar
With P as the prediction from a neural network, these questions include 'Why P?'.
no code implementations • 23 Mar 2021 • Mohit Prabhushankar, Ghassan AlRegib
In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast.
no code implementations • 23 Mar 2021 • Mohit Prabhushankar, Ghassan AlRegib
Neural networks trained to classify images do so by identifying features that allow them to distinguish between classes.
no code implementations • 13 Aug 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
To articulate the significance of the model perspective in novelty detection, we utilize backpropagated gradients.
no code implementations • 4 Aug 2020 • Yutong Sun, Mohit Prabhushankar, Ghassan AlRegib
In this paper, we show that existing recognition and localization deep architectures, which have not been exposed to eye tracking data or any saliency datasets, are capable of predicting human visual saliency.
3 code implementations • 1 Aug 2020 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
Current modes of visual explanations answer questions of the form 'Why P?'.
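For reference, a standard 'Why P?' visual explanation can be produced Grad-CAM-style by weighting feature maps with class-score gradients. The sketch below shows only this baseline, not the paper's contrastive explanations, and assumes the convolutional feature maps were retained with gradients enabled during the forward pass.

```python
import torch
import torch.nn.functional as F

def why_p_map(features, logits, class_idx):
    """Minimal Grad-CAM-style 'Why P?' map from retained conv features (1, C, H, W)."""
    grads = torch.autograd.grad(logits[0, class_idx], features, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * features).sum(dim=1))    # keep positive evidence only
    return cam / cam.max().clamp(min=1e-8)
```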
2 code implementations • ECCV 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
Anomalies require more drastic model updates to fully represent them compared to normal data.
no code implementations • ICLR 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.
no code implementations • 25 Sep 2019 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
Such a positioning scheme is based on a data point’s second-order property.
2 code implementations • 27 Aug 2019 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms.
no code implementations • 17 Feb 2019 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
In this paper, we generate and control semantically interpretable filters that are directly learned from natural images in an unsupervised fashion.
no code implementations • 21 Nov 2018 • Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
While assessing image quality, the filters need to capture perceptual differences based on dissimilarities between a reference image and its distorted version.
2 code implementations • 21 Nov 2018 • Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
We use multiple linear decoders to capture different abstraction levels of the image patches.
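As a rough sketch of capturing different abstraction levels, one can train a bank of small patch autoencoders with purely linear decoders and varying hidden sizes; the details below (ReLU encoder, specific patch and hidden sizes) are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PatchAE(nn.Module):
    """Single autoencoder over flattened patches; the decoder is purely linear."""
    def __init__(self, patch_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, patch_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# A bank whose hidden sizes span coarse-to-fine abstraction levels of 8x8 RGB patches
patch_dim = 8 * 8 * 3
decoder_bank = [PatchAE(patch_dim, h) for h in (64, 128, 256)]
```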
1 code implementation • 7 Dec 2017 • Dogancan Temel, Gukyeong Kwon, Mohit Prabhushankar, Ghassan AlRegib
We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions.