1 code implementation • 20 Jul 2023 • Zoe Fowler, Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
There exist two types of clinical trials: retrospective and prospective.
no code implementations • 24 May 2023 • Kiran Kokilepersaud, Stephanie Trejo Corona, Mohit Prabhushankar, Ghassan AlRegib, Charles Wykoff
We exploit this relationship by using the clinical data as pseudo-labels for our data without biomarker labels in order to choose positive and negative instances for training a backbone network with a supervised contrastive loss.
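A minimal sketch of this idea in PyTorch, assuming embeddings from a projection head and pseudo-labels obtained by binning clinical measurements; the names here (`supcon_pseudo_loss`, `pseudo_labels`) are illustrative and not the authors' released code:

```python
import torch
import torch.nn.functional as F

def supcon_pseudo_loss(features, pseudo_labels, temperature=0.07):
    # features: (N, D) projection-head outputs; pseudo_labels: (N,) integers
    # derived from clinical data (e.g., binned visual acuity or thickness values).
    features = F.normalize(features, dim=1)
    n = features.size(0)
    sim = features @ features.T / temperature                   # pairwise similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()    # numerical stability

    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    pos = (pseudo_labels.view(-1, 1).eq(pseudo_labels.view(1, -1)) & ~eye).float()

    exp_sim = torch.exp(sim).masked_fill(eye, 0.0)               # drop self-pairs
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Average over positives for each anchor that has at least one positive.
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```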
no code implementations • 28 Apr 2023 • Kiran Kokilepersaud, Mohit Prabhushankar, Yavuz Yarici, Ghassan AlRegib, Armin Parchami
In this work, we present a methodology to shape a fisheye-specific representation space that reflects the interaction between distortion and semantic context present in this data modality.
no code implementations • 6 Apr 2023 • Jinsol Lee, Charlie Lehman, Mohit Prabhushankar, Ghassan AlRegib
We define purview as the additional capacity necessary to characterize inference samples that differ from the training data.
1 code implementation • 16 Feb 2023 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib, Armin Parchami, Enrique Corona
To alleviate this issue, we propose a grounded second-order definition of information content and sample importance within the context of active learning.
1 code implementation • 11 Feb 2023 • Mohit Prabhushankar, Ghassan AlRegib
This paper conjectures and validates a framework that allows for action during inference in supervised neural networks.
no code implementations • 12 Jan 2023 • Ryan Benkert, Mohit Prabhushankar, Ghassan AlRegib
However, existing strategies base data selection directly on the representation of the unlabeled data, which is random for OOD samples by definition.
1 code implementation • 10 Nov 2022 • Chen Zhou, Mohit Prabhushankar, Ghassan AlRegib
Our evaluation of existing uncertainty estimation algorithms in the presence of HLU indicates the limitations of both the uncertainty metrics and the algorithms themselves in response to HLU.
no code implementations • 9 Nov 2022 • Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
This is accomplished by leveraging the larger amount of clinical data as pseudo-labels for our data without biomarker labels in order to choose positive and negative instances for training a backbone network with a supervised contrastive loss.
1 code implementation • 22 Sep 2022 • Mohit Prabhushankar, Kiran Kokilepersaud, Yash-yee Logan, Stephanie Trejo Corona, Ghassan AlRegib, Charles Wykoff
The dataset consists of 1268 near-IR fundus images each with at least 49 OCT scans, and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME.
1 code implementation • 17 Sep 2022 • Mohit Prabhushankar, Ghassan AlRegib
Finally, we ground the proposed machine introspection to human introspection for the application of image quality assessment.
no code implementations • 21 Jun 2022 • Yash-yee Logan, Mohit Prabhushankar, Ghassan AlRegib
Hence, active learning techniques that are developed for natural images are insufficient for handling medical data.
no code implementations • 16 Jun 2022 • Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib
In seismic interpretation, pixel-level labels of various rock structures can be time-consuming and expensive to obtain.
no code implementations • 16 Jun 2022 • Jinsol Lee, Mohit Prabhushankar, Ghassan AlRegib
We propose to utilize gradients for detecting adversarial and out-of-distribution samples.
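A hedged sketch of one way such a gradient score could be computed with a trained PyTorch classifier; backpropagating against a uniform target and reading the last linear layer's gradient norm are illustrative choices, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def gradient_score(model, x):
    """Illustrative detection score: magnitude of the weight gradient a sample
    induces when the loss is taken against an uninformative (uniform) target."""
    model.zero_grad()
    logits = model(x)                                    # x: (1, ...) input batch
    uniform = torch.full_like(logits, 1.0 / logits.size(1))
    loss = F.kl_div(F.log_softmax(logits, dim=1), uniform, reduction="batchmean")
    loss.backward()

    # Assumes the classifier ends in an nn.Linear layer.
    last_linear = [m for m in model.modules() if isinstance(m, torch.nn.Linear)][-1]
    return last_linear.weight.grad.norm().item()
```

A sample whose score falls far outside the range observed on held-out in-distribution data would then be flagged as adversarial or out-of-distribution.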
2 code implementations • 24 Feb 2022 • Ghassan AlRegib, Mohit Prabhushankar
With $P$ as the prediction from a neural network, these questions are `Why P?'
no code implementations • 23 Mar 2021 • Mohit Prabhushankar, Ghassan AlRegib
In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast.
no code implementations • 23 Mar 2021 • Mohit Prabhushankar, Ghassan AlRegib
Neural networks trained to classify images do so by identifying features that allow them to distinguish between classes.
no code implementations • 13 Aug 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
To articulate the significance of the model perspective in novelty detection, we utilize backpropagated gradients.
no code implementations • 4 Aug 2020 • Yutong Sun, Mohit Prabhushankar, Ghassan AlRegib
In this paper, we show that existing deep recognition and localization architectures, which have not been exposed to eye tracking data or any saliency datasets, are capable of predicting human visual saliency.
3 code implementations • 1 Aug 2020 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
Current modes of visual explanations answer questions of the form `Why P?'.
2 code implementations • ECCV 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
Compared to normal data, anomalies require more drastic model updates to be fully represented.
no code implementations • ICLR 2020 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information.
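A minimal sketch, assuming a trained PyTorch autoencoder `ae`, of how a gradient-based term might complement the activation-based reconstruction error; the weighting term `alpha` is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def anomaly_score(ae, x, alpha=1.0):
    """Activation-based term (reconstruction error) plus a gradient-based term
    measuring how much the sample asks the model's weights to change."""
    ae.zero_grad()
    rec_err = F.mse_loss(ae(x), x)
    rec_err.backward()
    grad_norm = sum(p.grad.norm() for p in ae.parameters() if p.grad is not None)
    return rec_err.item() + alpha * grad_norm.item()
```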
no code implementations • 25 Sep 2019 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
Such a positioning scheme is based on a data point’s second-order property.
2 code implementations • 27 Aug 2019 • Gukyeong Kwon, Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms.
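One hypothetical realization, not the paper's exact procedure: treat the backpropagated weight gradient of a chosen layer as a per-sample feature and compare gradient directions across samples; the layer choice and cosine distance are assumptions:

```python
import torch
import torch.nn.functional as F

def gradient_feature(ae, x, layer):
    # `layer` must be a module inside `ae`; its weight gradient after
    # backpropagating the reconstruction loss serves as the representation.
    ae.zero_grad()
    F.mse_loss(ae(x), x).backward()
    return layer.weight.grad.detach().flatten().clone()

def gradient_distance(ae, x1, x2, layer):
    # Cosine distance between the gradient directions of two samples.
    g1 = F.normalize(gradient_feature(ae, x1, layer), dim=0)
    g2 = F.normalize(gradient_feature(ae, x2, layer), dim=0)
    return 1.0 - torch.dot(g1, g2).item()
```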
no code implementations • 17 Feb 2019 • Mohit Prabhushankar, Gukyeong Kwon, Dogancan Temel, Ghassan AlRegib
In this paper, we generate and control semantically interpretable filters that are directly learned from natural images in an unsupervised fashion.
2 code implementations • 21 Nov 2018 • Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
We use multiple linear decoders to capture different abstraction levels of the image patches.
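A minimal sketch of this architecture, with hypothetical layer sizes; each linear decoder reconstructs the patch from a different depth of the encoder:

```python
import torch
import torch.nn as nn

class MultiDecoderAE(nn.Module):
    """Encoder with a separate linear decoder tapping each hidden layer, so each
    decoder reconstructs the flattened patch from a different abstraction level."""
    def __init__(self, patch_dim=64, hidden_dims=(128, 64, 32)):
        super().__init__()
        dims = [patch_dim, *hidden_dims]
        self.encoder_layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
             for i in range(len(hidden_dims))]
        )
        self.decoders = nn.ModuleList(
            [nn.Linear(h, patch_dim) for h in hidden_dims]
        )

    def forward(self, x):                       # x: (N, patch_dim) flattened patches
        recons, h = [], x
        for enc, dec in zip(self.encoder_layers, self.decoders):
            h = enc(h)
            recons.append(dec(h))               # reconstruction from this level
        return recons

def multi_level_loss(recons, x):
    # Sum of per-level reconstruction errors; per-level weights could differ.
    return sum(nn.functional.mse_loss(r, x) for r in recons)
```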
no code implementations • 21 Nov 2018 • Mohit Prabhushankar, Dogancan Temel, Ghassan AlRegib
While assessing image quality, the filters need to capture perceptual differences based on dissimilarities between a reference image and its distorted version.
1 code implementation • 7 Dec 2017 • Dogancan Temel, Gukyeong Kwon, Mohit Prabhushankar, Ghassan AlRegib
We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions.