Search Results for author: Akshay Mehra

Found 12 papers, 5 papers with code

On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization

1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm

To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.

Autonomous Driving • Domain Generalization • +1
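As an illustration only (not the authors' implementation), the style-smoothing idea behind TT-NSS can be sketched as a majority vote over randomly perturbed copies of a test input, abstaining when the vote agreement is too low for a risk-averse prediction. The toy linear classifier, the Gaussian noise standing in for style transfer, and the agreement threshold below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier(x):
    # Hypothetical base DG classifier: argmax over 3 classes via a fixed projection.
    W = np.array([[1.0, -0.5], [0.2, 0.9], [-0.8, 0.3]])
    return int(np.argmax(W @ x))

def style_smoothed_predict(x, n_styles=100, threshold=0.6):
    """Majority vote over 'stylized' copies of x; abstain (-1) when the
    top class wins less than `threshold` of the votes."""
    votes = np.zeros(3, dtype=int)
    for _ in range(n_styles):
        # Gaussian perturbation is a stand-in for neural style transfer here.
        styled = x + rng.normal(scale=0.1, size=x.shape)
        votes[classifier(styled)] += 1
    top = int(np.argmax(votes))
    return top if votes[top] / n_styles >= threshold else -1

pred = style_smoothed_predict(np.array([1.0, 0.5]))
```

The abstention branch is what makes the prediction risk-averse: near a decision boundary the stylized copies disagree, and the smoothed classifier declines to predict rather than guess.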

Analysis of Task Transferability in Large Pre-trained Classifiers

no code implementations • 3 Jul 2023 • Akshay Mehra, Yunbei Zhang, Jihun Hamm

We propose a novel Task Transfer Analysis approach that transforms the source distribution (and classifier) by changing the class prior distribution, label space, and feature space to produce a new source distribution (and classifier), allowing us to relate the loss of the downstream task (i.e., transferability) to that of the source task.

Transfer Learning

Understanding the Robustness of Multi-Exit Models under Common Corruptions

no code implementations • 3 Dec 2022 • Akshay Mehra, Skyler Seto, Navdeep Jaitly, Barry-John Theobald

Furthermore, the lack of calibration increases the inconsistency in the predictions of the model across exits, leading to both inefficient inference and more misclassifications compared with evaluation on in-distribution data.

On Certifying and Improving Generalization to Unseen Domains

1 code implementation • 24 Jun 2022 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.

Domain Generalization

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations • 1 Dec 2021 • Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness • Benchmarking • +1

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation • NeurIPS 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning • Domain Generalization • +1

Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks

no code implementations • 15 Jun 2021 • Byunggill Joe, Akshay Mehra, Insik Shin, Jihun Hamm

Electronic Health Records (EHRs) provide a wealth of information for machine learning algorithms to predict patient outcomes from data including diagnostic information, vital signs, lab tests, drug administration, and demographic information.

BIG-bench Machine Learning • Management • +1

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation • CVPR 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness • Bilevel Optimization • +2
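For context, the certified defenses this attack targets build on the randomized-smoothing guarantee of Cohen et al. (2019): if the smoothed classifier's top class has probability at least p_A > 1/2 under Gaussian noise of scale sigma, the prediction is certified within an L2 radius of sigma times the inverse normal CDF of p_A. The function below is an illustrative sketch of that radius computation, not code from the paper:

```python
from statistics import NormalDist

def certified_radius(p_a_lower, sigma):
    """Certified L2 radius from randomized smoothing (Cohen et al., 2019),
    given a lower bound p_a_lower on the smoothed top-class probability
    and the Gaussian noise scale sigma. Returns 0 when no guarantee holds."""
    if p_a_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a_lower)

r = certified_radius(0.99, 0.5)  # ≈ 0.5 * 2.326 ≈ 1.163
```

A poisoning attack on such a defense aims to shrink exactly this radius on targeted test points, which is why the paper measures attack success in terms of certified robustness rather than plain accuracy.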

Penalty Method for Inversion-Free Deep Bilevel Optimization

2 code implementations • 8 Nov 2019 • Akshay Mehra, Jihun Hamm

We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.

Bilevel Optimization • Data Poisoning • +2
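On a toy problem (not the paper's experiments), the penalty idea can be sketched by replacing the lower-level argmin with a penalty on the norm of its gradient and minimizing the penalized objective jointly in both variables, avoiding any Hessian inversion. The objectives, penalty schedule, and step sizes below are illustrative assumptions:

```python
# Toy bilevel problem:
#   upper: min_x f(x, y*(x)) = (y* - 1)^2 + 0.1 x^2
#   lower: y*(x) = argmin_y g(x, y),  g(x, y) = (y - x)^2 / 2
# Since y*(x) = x, the bilevel solution is x* = y* = 1/1.1 ≈ 0.909.

x, y = 0.0, 0.0
for gamma in (1.0, 10.0, 100.0, 1000.0):   # increasing penalty weight
    lr = 1.0 / (2.5 + 2.0 * gamma)         # step size kept stable for this quadratic
    for _ in range(5000):
        r = y - x                          # grad_y g(x, y); zero at lower-level optimum
        gx = 0.2 * x - gamma * r           # d/dx of f + (gamma/2) * r^2
        gy = 2.0 * (y - 1.0) + gamma * r   # d/dy of f + (gamma/2) * r^2
        x, y = x - lr * gx, y - lr * gy

print(round(x, 3), round(y, 3))  # both approach 1/1.1 ≈ 0.909
```

As gamma grows, the penalty forces y toward the lower-level optimum y*(x), so the joint minimizer converges to the bilevel solution using only first-order gradients.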

Fast Interactive Image Retrieval using large-scale unlabeled data

no code implementations • 12 Feb 2018 • Akshay Mehra, Jihun Hamm, Mikhail Belkin

Active learning reduces the number of user interactions by querying the labels of the most informative points, and GSSL allows the use of abundant unlabeled data along with the limited labeled data provided by the user.

Active Learning • Binary Classification • +2

Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples

no code implementations • ICLR 2018 • Jihun Hamm, Akshay Mehra

We demonstrate the minimax defense with two classes of attacks: gradient-based and neural-network-based attacks.
