no code implementations • 13 Jun 2024 • Athmanarayanan Lakshmi Narayanan, Ranganath Krishnan, Amrutha Machireddy, Mahesh Subedar
Foundational vision transformer models have shown impressive few-shot performance on many vision tasks.
no code implementations • 13 Sep 2021 • Ranganath Krishnan, Alok Sinha, Nilesh Ahuja, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness.
no code implementations • 13 Sep 2021 • Ranganath Krishnan, Nilesh Ahuja, Alok Sinha, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
We introduce supervised contrastive active learning (SCAL) and propose efficient active-learning query strategies based on feature similarity (featuresim) and principal component analysis based feature-reconstruction error (fre) to select informative data samples with diverse feature representations.
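Below is a minimal sketch of how such query scores could be computed, assuming feature vectors have already been extracted from the model for the labeled and unlabeled pools; the function names, the scoring direction, and the top-k selection rule are illustrative assumptions, not taken from the paper.

    import numpy as np

    def featuresim_scores(unlabeled_feats, labeled_feats):
        """Score unlabeled samples by dissimilarity to the labeled pool
        (lower cosine similarity is treated here as more informative)."""
        unl = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
        lab = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
        max_sim = (unl @ lab.T).max(axis=1)  # closest labeled sample per unlabeled one
        return -max_sim

    def fre_scores(unlabeled_feats, labeled_feats, n_components=32):
        """Score unlabeled samples by reconstruction error under a PCA basis
        fit on the labeled features (feature-reconstruction error, fre)."""
        mean = labeled_feats.mean(axis=0)
        _, _, vt = np.linalg.svd(labeled_feats - mean, full_matrices=False)
        basis = vt[:n_components]
        recon = (unlabeled_feats - mean) @ basis.T @ basis + mean
        return np.linalg.norm(unlabeled_feats - recon, axis=1)

    def select_queries(scores, k):
        # pick the k highest-scoring samples for annotation
        return np.argsort(scores)[::-1][:k]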
no code implementations • 10 Sep 2021 • Shashank Bujimalla, Mahesh Subedar, Omesh Tickoo
PS-NOC is agnostic to model architecture and focuses primarily on a training approach that uses existing fully paired image-caption data together with images that carry only novel-object detection labels (partially paired data).
no code implementations • 10 Jun 2021 • Shashank Bujimalla, Mahesh Subedar, Omesh Tickoo
In this paper, we study the impact of motion blur, a common quality flaw in real-world images, on a state-of-the-art two-stage image captioning solution, and observe that captioning performance degrades as blur intensity increases.
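As a rough illustration of the setup, the snippet below simulates motion blur of increasing intensity with a simple horizontal blur kernel using OpenCV; the kernel sizes and the synthetic input image are assumptions for illustration, not the paper's exact protocol.

    import cv2
    import numpy as np

    def horizontal_motion_blur(image, kernel_size):
        # larger kernel_size = stronger blur: average pixels along one row
        kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
        kernel[kernel_size // 2, :] = 1.0 / kernel_size
        return cv2.filter2D(image, -1, kernel)

    # synthetic stand-in for a real photo from a captioning dataset
    image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
    for k in (3, 7, 15, 31):
        blurred = horizontal_motion_blur(image, k)
        # each blurred variant would then be captioned and scored (e.g. CIDEr-D)
        cv2.imwrite(f"blurred_k{k}.jpg", blurred)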
no code implementations • 6 Apr 2020 • Shashank Bujimalla, Mahesh Subedar, Omesh Tickoo
The "baseline" for the policy gradients in B-SCST is generated by averaging the predictive quality metric (CIDEr-D) of captions drawn from the distribution obtained using a Bayesian DNN model.
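A schematic of that baseline inside a REINFORCE-style update is sketched below; the tensor shapes and the assumption that per-caption log-probabilities and CIDEr-D scores are already available are illustrative, not the paper's code.

    import torch

    def b_scst_style_loss(caption_log_probs, cider_scores):
        """
        caption_log_probs: (num_samples,) summed log-probability of each caption
                           sampled from the Bayesian model's predictive distribution
        cider_scores:      (num_samples,) CIDEr-D score of each sampled caption
        """
        baseline = cider_scores.mean()        # average quality of the drawn captions
        advantage = cider_scores - baseline   # reward relative to that baseline
        # policy-gradient loss: reinforce captions that beat the baseline
        return -(advantage.detach() * caption_log_probs).mean()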
no code implementations • 3 Dec 2019 • Mahesh Subedar, Nilesh Ahuja, Ranganath Krishnan, Ibrahima J. Ndiour, Omesh Tickoo
In the second approach, we use Bayesian deep neural networks trained with mean-field variational inference to estimate model uncertainty associated with the predictions.
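A minimal sketch of how such uncertainty estimates are typically obtained at inference time is shown below; it assumes a model whose forward pass draws a fresh weight sample from the learned mean-field posterior on every call, and the sample count is an arbitrary choice.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def predictive_uncertainty(bayesian_model, x, num_mc_samples=20):
        # Monte Carlo estimate of the predictive distribution: each forward pass
        # samples weights from the variational posterior
        probs = torch.stack([F.softmax(bayesian_model(x), dim=-1)
                             for _ in range(num_mc_samples)])   # (S, B, C)
        mean_probs = probs.mean(dim=0)
        # predictive entropy as a per-input model-uncertainty measure
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        return mean_probs, entropy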
no code implementations • ICCV 2019 • Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, Jonathan Huang
In the multimodal setting, the proposed framework improved precision-recall AUC by 10.2% on a subset of the MiT dataset compared to the non-Bayesian baseline.
2 code implementations • 12 Jun 2019 • Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo
We propose the MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks.
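The core idea can be sketched as follows, assuming access to the weights of a pretrained deterministic DNN: each weight's Gaussian prior is centered at the pretrained value with a standard deviation proportional to its magnitude, where the scale factor delta is a hyperparameter.

    import numpy as np

    def moped_style_prior(pretrained_weights, delta=0.1):
        # Empirical Bayes prior for a Bayesian layer: mean at the pretrained
        # deterministic weights, sigma scaled by each weight's magnitude
        prior_mu = np.asarray(pretrained_weights, dtype=np.float32).copy()
        prior_sigma = delta * np.abs(prior_mu)
        return prior_mu, prior_sigma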
no code implementations • 27 Nov 2018 • Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, Jonathan Huang
In the multimodal setting, the proposed framework improved precision-recall AUC by 10.2% on a subset of the MiT dataset compared to the non-Bayesian baseline.
no code implementations • 8 Nov 2018 • Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo
We show that Bayesian inference applied to DNNs provides reliable confidence measures for the visual activity recognition task, compared to conventional DNNs.