1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm
To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
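The style smoothing described above can be sketched as averaging class probabilities over style-randomized copies of the test input. Everything below (`stylize`, the sample count, the toy classifier) is an illustrative placeholder, not the paper's actual style-transfer procedure.

```python
import numpy as np

def style_smoothed_predict(classifier, x, stylize, n_samples=32, rng=None):
    """Predict with a 'style-smoothed' classifier: average class probabilities
    over n_samples style-randomized copies of the input x."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probs = np.mean([classifier(stylize(x, rng)) for _ in range(n_samples)], axis=0)
    return int(np.argmax(probs)), probs

# Toy stand-ins for a real classifier and style perturbation (hypothetical).
toy_classifier = lambda x: np.array([0.7, 0.3]) if x.mean() > 0 else np.array([0.3, 0.7])
toy_stylize = lambda x, rng: x + 0.01 * rng.standard_normal(x.shape)

label, probs = style_smoothed_predict(toy_classifier, np.ones(4), toy_stylize)
```

The averaged probabilities can also be thresholded to abstain on low-confidence inputs, which is the risk-averse behavior the abstract refers to.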
no code implementations • 6 Jul 2023 • Janet Wang, Yunbei Zhang, Zhengming Ding, Jihun Hamm
The adoption of UDA with multiple sources can simultaneously enrich the training set and bridge the domain gap between different skin lesion datasets, which vary due to distinct acquisition protocols.
no code implementations • 3 Jul 2023 • Akshay Mehra, Yunbei Zhang, Jihun Hamm
We propose a novel Task Transfer Analysis approach that transforms the source distribution (and classifier) by changing the class prior distribution, label, and feature spaces to produce a new source distribution (and classifier) and allows us to relate the loss of the downstream task (i.e., transferability) to that of the source task.
1 code implementation • 27 Jun 2023 • Yunsung Chung, Chanho Lim, Chao Huang, Nassir Marrouche, Jihun Hamm
Specifically, we leverage the contrastive loss to learn representations of both the foreground and background regions in the images.
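A contrastive loss over region embeddings can be sketched in the InfoNCE style: two views of each foreground region act as a positive pair, with background regions as negatives. This is a generic sketch under those assumptions, not the paper's exact formulation.

```python
import numpy as np

def region_contrastive_loss(fg_a, fg_b, bg, temperature=0.1):
    """InfoNCE-style loss: fg_a[i] and fg_b[i] are two views of the same
    foreground region (positives); rows of bg are background negatives.
    All inputs are (N, D) embedding matrices."""
    norm = lambda m: m / np.linalg.norm(m, axis=1, keepdims=True)
    fg_a, fg_b, bg = norm(fg_a), norm(fg_b), norm(bg)
    logits = np.concatenate([fg_a @ fg_b.T, fg_a @ bg.T], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(fg_a))  # the i-th positive sits at column i
    return -np.mean(log_probs[idx, idx])

fg_a = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
bg = np.array([[0, 0, 1.0, 0], [0, 0, 0, 1.0]])
aligned = region_contrastive_loss(fg_a, fg_a, bg)        # matched views: low loss
mismatched = region_contrastive_loss(fg_a, fg_a[::-1], bg)  # swapped views: high loss
```

The loss is small when matched views agree and large when they are shuffled, which is the behavior the training objective relies on.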
no code implementations • 8 Jul 2022 • Byunggill Joe, Insik Shin, Jihun Hamm
Recurrent models are frequently being used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for.
1 code implementation • 24 Jun 2022 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.
no code implementations • 1 Dec 2021 • Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao
To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
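One simple spectrum-level augmentation in this spirit mixes the Fourier amplitude of one image with another while keeping the original phase. This is a generic sketch of Fourier-domain mixing, not the paper's exact FourierMix recipe.

```python
import numpy as np

def fourier_amplitude_mix(x, y, alpha=0.3):
    """Blend the Fourier amplitude spectra of images x and y while keeping
    x's phase; alpha controls how much of y's spectrum is mixed in."""
    fx, fy = np.fft.fft2(x), np.fft.fft2(y)
    amp = (1 - alpha) * np.abs(fx) + alpha * np.abs(fy)
    return np.real(np.fft.ifft2(amp * np.exp(1j * np.angle(fx))))

rng = np.random.default_rng(0)
x, y = rng.random((8, 8)), rng.random((8, 8))
aug = fourier_amplitude_mix(x, y)
```

With `alpha=0` the augmentation is the identity, so `alpha` directly trades off spectral diversity against fidelity to the original image.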
1 code implementation • NeurIPS 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.
no code implementations • ICML Workshop AML 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
However, the effect of poisoning is limited to the setting where training and test data are from the same distribution.
no code implementations • 15 Jun 2021 • Byunggill Joe, Akshay Mehra, Insik Shin, Jihun Hamm
Electronic Health Records (EHRs) provide a wealth of information for machine learning algorithms to predict patient outcomes from data including diagnostic information, vital signs, lab tests, drug administration, and demographic information.
no code implementations • 7 Dec 2020 • Byunggill Joe, Jihun Hamm, Sung Ju Hwang, Sooel Son, Insik Shin
Although deep neural networks have shown promising performances on various tasks, they are susceptible to incorrect predictions induced by imperceptibly small perturbations in inputs.
1 code implementation • CVPR 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.
2 code implementations • 8 Nov 2019 • Akshay Mehra, Jihun Hamm
We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.
1 code implementation • ICML 2018 • Jihun Hamm, Yung-Kyun Noh
Minimax optimization plays a key role in adversarial training of machine learning algorithms, such as learning generative models, domain adaptation, privacy preservation, and robust learning.
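The simplest baseline for such minimax problems is alternating gradient descent-ascent, shown below on the toy saddle function f(x, y) = x² − y² (a standalone illustration, not the algorithm proposed in the paper):

```python
def gradient_descent_ascent(grad_x, grad_y, x0, y0, lr=0.1, steps=200):
    """Alternate a descent step on x (the min player) with an ascent
    step on y (the max player)."""
    x, y = x0, y0
    for _ in range(steps):
        x = x - lr * grad_x(x, y)  # minimize over x
        y = y + lr * grad_y(x, y)  # maximize over y
    return x, y

# f(x, y) = x**2 - y**2 has its saddle point at (0, 0).
x_star, y_star = gradient_descent_ascent(lambda x, y: 2 * x,
                                         lambda x, y: -2 * y,
                                         1.0, 1.0)
```

On this convex-concave example the iterates converge to the saddle point; on the nonconvex games arising in adversarial training, plain descent-ascent can cycle or diverge, which is what motivates more careful minimax solvers.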
no code implementations • 12 Feb 2018 • Akshay Mehra, Jihun Hamm, Mikhail Belkin
Active learning reduces the number of user interactions by querying the labels of the most informative points, and GSSL allows the use of abundant unlabeled data alongside the limited labeled data provided by the user.
no code implementations • ICLR 2018 • Jihun Hamm, Akshay Mehra
We demonstrate the minimax defense with two classes of attacks: gradient-based and neural network-based attacks.
1 code implementation • 12 Oct 2016 • Jihun Hamm
The paper proposes a novel filter-based mechanism which preserves privacy of continuous and high-dimensional attributes against inference attacks.
no code implementations • 10 Feb 2016 • Jihun Hamm, Paul Cao, Mikhail Belkin
How can we build an accurate and differentially private global classifier by combining locally-trained classifiers from different parties, without access to any party's private data?
no code implementations • 27 Feb 2015 • Jihun Hamm, Mikhail Belkin
In this paper we propose a non-metric ranking-based representation of semantic similarity that allows natural aggregation of semantic information from multiple heterogeneous sources.
no code implementations • 11 Jan 2015 • Jihun Hamm, Adam Champion, Guoxing Chen, Mikhail Belkin, Dong Xuan
Smart devices with built-in sensors, computational capabilities, and network connectivity have become increasingly pervasive.
no code implementations • NeurIPS 2008 • Jihun Hamm, Daniel D. Lee
Subspace-based learning problems involve data whose elements are linear subspaces of a vector space.