Search Results for author: Jay Nandy

Found 8 papers, 4 papers with code

Fairness under Covariate Shift: Improving Fairness-Accuracy tradeoff with few Unlabeled Test Samples

1 code implementation • 11 Oct 2023 • Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, Aravindan Raghuveer

Our third contribution is theoretical, where we show that our weighted entropy term along with prediction loss on the training set approximates test loss under covariate shift.
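The combined objective described above — prediction loss on labeled training data plus a weighted entropy term over a few unlabeled test samples — can be sketched as follows. This is an illustrative form only; the function names, the scalar trade-off `lam`, and the per-sample `weights` are assumptions, not the paper's exact weighting scheme.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of predicted class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def combined_loss(train_loss, test_probs, weights, lam=1.0):
    """Training prediction loss plus a weighted entropy term over
    unlabeled test predictions, intended to approximate test loss
    under covariate shift (illustrative sketch, not the paper's code)."""
    weighted_ent = np.mean(weights * entropy(test_probs))
    return train_loss + lam * weighted_ent
```

Minimizing the entropy term pushes the model toward confident predictions on the shifted test distribution, with the weights correcting for the train/test density mismatch.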

Fairness • Out-of-Distribution Generalization

Multi-Variate Time Series Forecasting on Variable Subsets

1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran

Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique is able to recover close to 95% performance of the models even when only 15% of the original variables are present.

Multivariate Time Series Forecasting • Time Series

Distributional Shifts in Automated Diabetic Retinopathy Screening

no code implementations • 25 Jul 2021 • Jay Nandy, Wynne Hsu, Mong Li Lee

Deep learning-based models are developed to automatically detect if a retina image is 'referable' in diabetic retinopathy (DR) screening.

Classification

Towards Bridging the Gap between Empirical and Certified Robustness against Adversarial Examples

no code implementations • 9 Feb 2021 • Jay Nandy, Sudipan Saha, Wynne Hsu, Mong Li Lee, Xiao Xiang Zhu

In this paper, we propose a novel method, called 'Certification through Adaptation', that transforms an AT model into a randomized smoothing classifier during inference to provide certified robustness for the ℓ2 norm without affecting its empirical robustness against adversarial attacks.
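The randomized-smoothing inference step this method builds on can be sketched as below: classify many Gaussian-perturbed copies of the input and take the majority vote. This shows standard randomized smoothing only — the paper's adaptation of an AT model is not reproduced here, and `model`, `sigma`, and `n` are illustrative assumptions.

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.25, n=100, seed=None):
    """Majority-vote prediction over Gaussian-perturbed copies of x.
    `model` maps a batch of inputs to integer class labels.
    Certifying an l2 radius additionally requires a statistical
    confidence bound on the top-class count (omitted here)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = model(x[None, :] + noise)   # labels for each noisy copy
    counts = np.bincount(preds, minlength=2)
    return int(np.argmax(counts))
```

With a larger noise level `sigma`, the certified radius grows but clean accuracy of the smoothed classifier typically drops — the usual robustness trade-off.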

Adversarial Robustness

Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples

1 code implementation • NeurIPS 2020 • Jay Nandy, Wynne Hsu, Mong Li Lee

Among existing uncertainty estimation approaches, Dirichlet Prior Network (DPN) distinctly models different predictive uncertainty types.
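A DPN predicts Dirichlet concentration parameters, from which the different uncertainty types can be separated. The decomposition below (total uncertainty = expected data uncertainty + mutual information, with mutual information indicating distributional uncertainty) is the standard DPN-style computation, sketched under assumed names; it is not the paper's code.

```python
import numpy as np
from scipy.special import digamma

def dpn_uncertainties(alpha, eps=1e-12):
    """Split predictive uncertainty from Dirichlet concentration
    parameters `alpha` (one positive value per class).
    Returns (total, data, distributional), where distributional
    uncertainty is the mutual information I = H[E[p]] - E[H[p]]."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()                       # Dirichlet precision
    p = alpha / a0                         # expected categorical distribution
    total = -np.sum(p * np.log(p + eps))   # H[E[p]]
    data = -np.sum(p * (digamma(alpha + 1) - digamma(a0 + 1)))  # E[H[p]]
    return total, data, total - data
```

A flat, low-precision Dirichlet (e.g. alpha = [1, 1, 1]) yields high mutual information, flagging an out-of-distribution input, while a sharp high-precision Dirichlet attributes its uncertainty to the data itself.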

Out of Distribution (OOD) Detection

Approximate Manifold Defense Against Multiple Adversarial Perturbations

2 code implementations • 5 Apr 2020 • Jay Nandy, Wynne Hsu, Mong Li Lee

Using adversarial training to defend against multiple types of perturbation requires expensive adversarial examples from different perturbation types at each training step.

Adversarial Robustness • Image Classification

Improving Dirichlet Prior Network for Out-of-Distribution Example Detection

no code implementations • 25 Sep 2019 • Jay Nandy

Predictive uncertainties can originate from the uncertainty in model parameters, data uncertainty or due to distributional mismatch between training and test examples.

Normal Similarity Network for Generative Modelling

no code implementations • 14 May 2018 • Jay Nandy, Wynne Hsu, Mong Li Lee

Gaussian distributions are commonly used as a key building block in many generative models.

Density Estimation • Image Generation
