1 code implementation • 11 Oct 2023 • Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, Aravindan Raghuveer
Our third contribution is theoretical: we show that our weighted entropy term, combined with the prediction loss on the training set, approximates the test loss under covariate shift.
1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran
Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique recovers close to 95% of the models' performance even when only 15% of the original variables are present.
no code implementations • 25 Jul 2021 • Jay Nandy, Wynne Hsu, Mong Li Lee
Deep learning-based models are developed to automatically detect whether a retina image is 'referable' in diabetic retinopathy (DR) screening.
no code implementations • 9 Feb 2021 • Jay Nandy, Sudipan Saha, Wynne Hsu, Mong Li Lee, Xiao Xiang Zhu
In this paper, we propose a novel method, called Certification through Adaptation, that transforms an adversarially trained (AT) model into a randomized smoothing classifier during inference, providing certified robustness for the ℓ2 norm without affecting its empirical robustness against adversarial attacks.
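As a rough illustration of the general randomized-smoothing idea the abstract refers to (not the paper's specific adaptation procedure), a smoothed classifier takes a majority vote of a base classifier's predictions under Gaussian input noise; the noise scale `sigma` is what ties the vote to an ℓ2 certificate. The base classifier here is a toy stand-in:

```python
import random
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of a base classifier under isotropic Gaussian noise.

    base_classifier: any function mapping a feature list to a class label
    (hypothetical stand-in for a trained network).
    sigma: std-dev of the Gaussian noise, the l2 certification scale.
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[base_classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy base classifier: predicts by the sign of the first coordinate.
clf = lambda v: int(v[0] > 0.0)
print(smoothed_predict(clf, [1.0, -0.5]))  # -> 1
```

In the full method, the margin between the top two vote counts is converted into a certified ℓ2 radius; this sketch only shows the smoothed prediction itself.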
1 code implementation • NeurIPS 2020 • Jay Nandy, Wynne Hsu, Mong Li Lee
Among existing uncertainty estimation approaches, Dirichlet Prior Network (DPN) distinctly models different predictive uncertainty types.
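A DPN predicts the concentration parameters of a Dirichlet over categoricals, and the distinct uncertainty types fall out of that distribution: the entropy of the expected categorical measures total uncertainty, while the mutual information isolates distributional uncertainty. A minimal sketch (the `digamma` helper and function names are mine, not the paper's):

```python
import math

def digamma(x):
    """Digamma via recurrence plus an asymptotic series (accurate enough here)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def dpn_uncertainties(alpha):
    """Total uncertainty and distributional uncertainty (mutual information)
    of a Dirichlet with concentration parameters alpha."""
    a0 = sum(alpha)
    p = [a / a0 for a in alpha]
    total = -sum(pk * math.log(pk) for pk in p)           # H of expected categorical
    expected = -sum(pk * (digamma(a + 1.0) - digamma(a0 + 1.0))
                    for pk, a in zip(p, alpha))           # E[H[p]] under the Dirichlet
    return total, total - expected

# Flat Dirichlet (out-of-distribution input): high distributional uncertainty.
print(dpn_uncertainties([1.0, 1.0, 1.0]))
# Large, equal alphas (in-distribution but ambiguous): low distributional uncertainty.
print(dpn_uncertainties([30.0, 30.0, 30.0]))
```

Both examples have the same total uncertainty (a flat expected categorical), but only the flat Dirichlet attributes it to distributional mismatch — the separation DPNs are built for.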
2 code implementations • 5 Apr 2020 • Jay Nandy, Wynne Hsu, Mong Li Lee
Using adversarial training to defend against multiple types of perturbation requires expensive adversarial examples from different perturbation types at each training step.
no code implementations • 25 Sep 2019 • Jay Nandy
Predictive uncertainties can originate from uncertainty in the model parameters, from data uncertainty, or from a distributional mismatch between training and test examples.
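One standard way to separate these sources (illustrative only; the thesis may use other estimators) is the ensemble decomposition: total uncertainty is the entropy of the averaged prediction, data uncertainty is the average entropy of the members, and their gap — the mutual information — measures disagreement between models:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def decompose(member_preds):
    """Split total predictive uncertainty into a data-uncertainty part and a
    model-disagreement part, given categorical outputs of ensemble members."""
    n = len(member_preds)
    k = len(member_preds[0])
    mean = [sum(p[j] for p in member_preds) / n for j in range(k)]
    total = entropy(mean)                              # H of averaged prediction
    data = sum(entropy(p) for p in member_preds) / n   # average member entropy
    return total, data, total - data                   # gap = disagreement (MI)

# Confident but contradictory members: uncertainty comes from disagreement.
print(decompose([[0.99, 0.01], [0.01, 0.99]]))
# Members agree on a flat prediction: uncertainty is in the data itself.
print(decompose([[0.5, 0.5], [0.5, 0.5]]))
```

The two cases have identical total uncertainty; only the decomposition tells a mismatched input apart from a genuinely ambiguous one.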
no code implementations • 14 May 2018 • Jay Nandy, Wynne Hsu, Mong Li Lee
Gaussian distributions are commonly used as a key building block in many generative models.
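One reason diagonal Gaussians are so convenient as a building block is the reparameterization trick: a sample z = μ + σ·ε with ε ~ N(0, I) stays differentiable with respect to the predicted (μ, log σ²). A minimal sketch (the function name and parameterization are illustrative, not this paper's model):

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I), where
    sigma = exp(0.5 * log_var) per dimension (diagonal Gaussian)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

rng = random.Random(0)
samples = [reparameterize([1.0, -1.0], [0.0, math.log(0.25)], rng)
           for _ in range(20000)]
mean0 = sum(s[0] for s in samples) / len(samples)
print(round(mean0, 2))  # close to mu[0] = 1.0
```

Parameterizing log σ² rather than σ keeps the standard deviation positive without constraints, which is the usual design choice in Gaussian-based generative models.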