no code implementations • 3 Dec 2024 • Ranganath Krishnan, Piyush Khanna, Omesh Tickoo
Through rigorous evaluation on multiple free-form question-answering datasets and models, we demonstrate that our uncertainty-aware fine-tuning approach yields better calibrated uncertainty estimates in natural language generation tasks than fine-tuning with the standard causal language modeling loss.
no code implementations • 13 Jun 2024 • Athmanarayanan Lakshmi Narayanan, Ranganath Krishnan, Amrutha Machireddy, Mahesh Subedar
Foundational vision transformer models have shown impressive few-shot performance on many vision tasks.
no code implementations • 17 Feb 2024 • Yang Ni, Zhuowen Zou, Wenjun Huang, Hanning Chen, William Youngwoo Chung, Samuel Cho, Ranganath Krishnan, Pietro Mercati, Mohsen Imani
Drawing inspiration from the outstanding learning capability of the human brain, Hyperdimensional Computing (HDC) emerges as a novel computing paradigm that leverages high-dimensional vector representations and operations for brain-like, lightweight Machine Learning (ML).
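A toy illustration of the HDC primitives the abstract alludes to (not the paper's implementation): items are encoded as random high-dimensional bipolar vectors, combined by element-wise addition (bundling) and multiplication (binding), and compared by cosine similarity. The dimensionality and operations below are the standard textbook choices, assumed here for illustration.

```python
import numpy as np

D = 10_000                                   # typical HDC dimensionality
rng = np.random.default_rng(0)

def hypervector():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b, c = hypervector(), hypervector(), hypervector()

bundle = a + b       # bundling: the result stays similar to its parts
bound = a * b        # binding: the result is dissimilar to both inputs

assert cos(bundle, a) > 0.5          # bundle remembers a ...
assert abs(cos(bundle, c)) < 0.1     # ... but not unrelated c
assert abs(cos(bound, a)) < 0.1     # binding produces a new "symbol"
```

In high dimensions, random hypervectors are nearly orthogonal, which is what makes these simple vector operations usable as a lightweight symbolic memory.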
no code implementations • 9 Dec 2022 • Neslihan Kose, Ranganath Krishnan, Akash Dhamasia, Omesh Tickoo, Michael Paulitsch
Reliable uncertainty quantification in deep neural networks is crucial in safety-critical applications such as automated driving for trustworthy and informed decision-making.
no code implementations • 13 Sep 2021 • Ranganath Krishnan, Nilesh Ahuja, Alok Sinha, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
We introduce supervised contrastive active learning (SCAL) and propose efficient active-learning query strategies based on feature similarity (featuresim) and on principal-component-analysis-based feature-reconstruction error (fre) to select informative data samples with diverse feature representations.
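A hypothetical sketch (not the paper's code) of the PCA-based feature-reconstruction-error (fre) score: fit a low-rank subspace to pooled features, reconstruct each sample from it, and rank samples by reconstruction error, so samples the subspace cannot explain become candidates for labeling. The rank and data below are illustrative.

```python
import numpy as np

def fre_scores(features, n_components=2):
    """Score each row by its reconstruction error under a rank-k PCA
    of the feature matrix; higher error = more novel sample."""
    mean = features.mean(axis=0)
    centered = features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                    # top principal directions
    recon = centered @ basis.T @ basis + mean    # project and reconstruct
    return np.linalg.norm(features - recon, axis=1)

rng = np.random.default_rng(0)
# 100 samples lying (almost) in a 2-D subspace, plus one off-subspace outlier.
X = rng.normal(size=(100, 8)) * np.array([5, 5, .1, .1, .1, .1, .1, .1])
X = np.vstack([X, np.full((1, 8), 10.0)])
scores = fre_scores(X, n_components=2)
print(scores.argmax())   # the outlier at index 100 ranks highest
```

In an active-learning loop, the unlabeled samples with the highest fre scores would be sent to the annotator first.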
no code implementations • 13 Sep 2021 • Ranganath Krishnan, Alok Sinha, Nilesh Ahuja, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness.
1 code implementation • NeurIPS 2020 • Ranganath Krishnan, Omesh Tickoo
Obtaining reliable and accurate quantification of uncertainty estimates from deep neural networks is important in safety-critical applications.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
no code implementations • 3 Dec 2019 • Mahesh Subedar, Nilesh Ahuja, Ranganath Krishnan, Ibrahima J. Ndiour, Omesh Tickoo
In the second approach, we use Bayesian deep neural networks trained with mean-field variational inference to estimate model uncertainty associated with the predictions.
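An illustrative sketch, not the paper's exact model: once a network is trained with mean-field variational inference, model uncertainty is commonly estimated by averaging softmax predictions over several stochastic forward passes (weights drawn from the variational posterior) and measuring the spread, e.g. via predictive entropy. The logits below are synthetic stand-ins for those Monte Carlo passes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(mc_logits):
    """mc_logits: (n_mc_samples, n_classes) logits from stochastic
    forward passes; returns the entropy of the averaged prediction."""
    p_mean = softmax(mc_logits).mean(axis=0)     # Monte Carlo average
    return float(-(p_mean * np.log(p_mean + 1e-12)).sum())

# All MC samples agree -> low predictive entropy.
confident = np.tile([5.0, 0.0, 0.0], (10, 1))
# MC samples disagree -> higher predictive entropy.
rng = np.random.default_rng(0)
uncertain = rng.normal(scale=3.0, size=(10, 3))

assert predictive_entropy(confident) < predictive_entropy(uncertain)
```

High-entropy predictions can then be flagged or deferred, which is the behavior safety-critical pipelines rely on.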
2 code implementations • 12 Jun 2019 • Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo
We propose the MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks.
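A minimal sketch of the MOPED idea: initialize the mean-field Gaussian over each Bayesian layer's weights from a pretrained deterministic network, with means set to the pretrained weights and standard deviations scaled by their magnitudes. The `delta` value and layer shape here are illustrative, not the paper's settings.

```python
import numpy as np

def moped_init(w_mle, delta=0.1):
    """Empirical-Bayes initialization: mean = pretrained (MLE) weights,
    std = delta * |w|, so larger weights get proportionally wider priors."""
    mu = w_mle.copy()
    sigma = delta * np.abs(w_mle)
    return mu, sigma

rng = np.random.default_rng(0)
w_pretrained = rng.normal(size=(4, 3))           # stand-in DNN layer weights
mu, sigma = moped_init(w_pretrained, delta=0.1)

# Drawing one weight realization via the reparameterization trick:
eps = rng.standard_normal(w_pretrained.shape)
w_sample = mu + sigma * eps
```

Starting variational inference from such informed priors, rather than uninformed zero-mean Gaussians, is what lets the Bayesian model inherit the pretrained network's accuracy.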
no code implementations • 27 Nov 2018 • Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, Jonathan Huang
In the multimodal setting, the proposed framework improved precision-recall AUC by 10.2% on a subset of the MiT dataset compared to the non-Bayesian baseline.
no code implementations • 8 Nov 2018 • Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo
We show that Bayesian inference applied to DNNs provides reliable confidence measures for the visual activity recognition task compared to conventional DNNs.