1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm
To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
no code implementations • 3 Jul 2023 • Akshay Mehra, Yunbei Zhang, Jihun Hamm
We propose a novel Task Transfer Analysis approach that transforms the source distribution (and classifier) by changing the class prior distribution, label, and feature spaces to produce a new source distribution (and classifier) and allows us to relate the loss of the downstream task (i.e., transferability) to that of the source task.
no code implementations • 3 Dec 2022 • Akshay Mehra, Skyler Seto, Navdeep Jaitly, Barry-John Theobald
Furthermore, the lack of calibration increases the inconsistency in the predictions of the model across exits, leading to both inefficient inference and more misclassifications compared with evaluation on in-distribution data.
1 code implementation • 24 Jun 2022 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.
no code implementations • 1 Dec 2021 • Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao
To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
1 code implementation • NeurIPS 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.
no code implementations • ICML Workshop AML 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
However, this limited effect of poisoning holds only in the setting where training and test data come from the same distribution.
no code implementations • 15 Jun 2021 • Byunggill Joe, Akshay Mehra, Insik Shin, Jihun Hamm
Electronic Health Records (EHRs) provide a wealth of information for machine learning algorithms to predict patient outcomes from data including diagnostic information, vital signs, lab tests, drug administration, and demographic information.
1 code implementation • CVPR 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.
2 code implementations • 8 Nov 2019 • Akshay Mehra, Jihun Hamm
We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.
no code implementations • 12 Feb 2018 • Akshay Mehra, Jihun Hamm, Mikhail Belkin
Active learning reduces the number of user interactions by querying the labels of the most informative points, and GSSL allows the use of abundant unlabeled data alongside the limited labeled data provided by the user.
no code implementations • ICLR 2018 • Jihun Hamm, Akshay Mehra
We demonstrate the minimax defense against two classes of attacks: gradient-based and neural network-based attacks.