Search Results for author: Jihun Hamm

Found 21 papers, 8 papers with code

On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization

1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm

To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.

Autonomous Driving Domain Generalization +1

Achieving Reliable and Fair Skin Lesion Diagnosis via Unsupervised Domain Adaptation

no code implementations • 6 Jul 2023 • Janet Wang, Yunbei Zhang, Zhengming Ding, Jihun Hamm

The adoption of UDA with multiple sources can simultaneously enrich the training set and bridge the domain gap between different skin lesion datasets, which vary due to distinct acquisition protocols.

Binary Classification Fairness +4

Analysis of Task Transferability in Large Pre-trained Classifiers

no code implementations • 3 Jul 2023 • Akshay Mehra, Yunbei Zhang, Jihun Hamm

We propose a novel Task Transfer Analysis approach that transforms the source distribution (and classifier) by changing the class-prior distribution, label space, and feature space to produce a new source distribution (and classifier), allowing us to relate the loss of the downstream task (i.e., transferability) to that of the source task.

Transfer Learning

FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation

1 code implementation • 27 Jun 2023 • Yunsung Chung, Chanho Lim, Chao Huang, Nassir Marrouche, Jihun Hamm

Specifically, we leverage the contrastive loss to learn representations of both the foreground and background regions in the images.

Contrastive Learning Image Segmentation +3
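The foreground/background contrastive objective above can be illustrated with a minimal InfoNCE-style loss. This is a generic sketch of a contrastive loss, not FBA-Net's exact formulation; the embeddings, temperature `tau`, and the `info_nce` helper are all illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative, not FBA-Net's exact loss)."""
    def sim(a, b):
        # cosine similarity between two embedding vectors
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability before the softmax
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Toy 2-D embeddings: the loss is low when the positive is similar to the
# anchor (foreground-foreground) and high when it is not.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.0, 1.0])
loss_good = info_nce(anchor, positive, [negative])
loss_bad  = info_nce(anchor, negative, [positive])
```

Minimizing such a loss pulls same-region (e.g., foreground) embeddings together and pushes foreground and background embeddings apart.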

Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future

no code implementations • 8 Jul 2022 • Byunggill Joe, Insik Shin, Jihun Hamm

Recurrent models are frequently being used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for.

Autonomous Driving

On Certifying and Improving Generalization to Unseen Domains

1 code implementation • 24 Jun 2022 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.

Domain Generalization

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations • 1 Dec 2021 • Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness Benchmarking +1

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation • NeurIPS 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning Domain Generalization +1

Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks

no code implementations • 15 Jun 2021 • Byunggill Joe, Akshay Mehra, Insik Shin, Jihun Hamm

Electronic Health Records (EHRs) provide a wealth of information for machine learning algorithms to predict patient outcomes from data including diagnostic information, vital signs, lab tests, drug administration, and demographic information.

BIG-bench Machine Learning Management +1

Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection

no code implementations • 7 Dec 2020 • Byunggill Joe, Jihun Hamm, Sung Ju Hwang, Sooel Son, Insik Shin

Although deep neural networks have shown promising performances on various tasks, they are susceptible to incorrect predictions induced by imperceptibly small perturbations in inputs.

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation • CVPR 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness Bilevel Optimization +2
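The randomized smoothing defense targeted by the attack above can be sketched in a few lines: a smoothed classifier predicts by majority vote of a base classifier over Gaussian-perturbed copies of the input (in the spirit of Cohen et al., 2019). The toy threshold classifier `base` and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of the base classifier over Gaussian-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    noisy = x + rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    votes = np.bincount([base_classifier(z) for z in noisy])
    return int(np.argmax(votes))

# Toy base classifier: thresholds the mean of the input vector (illustrative).
base = lambda z: int(z.mean() > 0.0)

x = np.full(4, 0.5)                 # a point well inside class 1
label = smoothed_predict(base, x)   # stable prediction under Gaussian noise
```

Data poisoning against such a defense must shift the base classifier enough to flip the *vote majority*, which is what makes the attack studied in the paper nontrivial.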

Penalty Method for Inversion-Free Deep Bilevel Optimization

2 code implementations • 8 Nov 2019 • Akshay Mehra, Jihun Hamm

We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.

Bilevel Optimization Data Poisoning +2
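The inversion-free idea behind the penalty method can be sketched on a toy bilevel problem: replace the lower-level argmin constraint with a penalty on the norm of the lower-level gradient, then run plain gradient descent on both variables jointly. The quadratic objectives, fixed penalty weight `gamma`, and step sizes below are illustrative assumptions, not the paper's experiments.

```python
# Toy bilevel problem:
#   upper level: min_u f(u, v*) = (v* - 1)^2
#   lower level: v*(u) = argmin_v g(u, v) = (v - u)^2   (so v* = u)
# Penalty reformulation (sketch of the inversion-free idea):
#   min_{u,v} f(u, v) + (gamma / 2) * ||dg/dv(u, v)||^2

def penalty_bilevel(gamma=10.0, lr=0.02, steps=2000):
    u, v = 0.0, 0.0
    for _ in range(steps):
        dg_dv = 2.0 * (v - u)                      # lower-level gradient
        dF_dv = 2.0 * (v - 1.0) + gamma * dg_dv * 2.0   # d/dv of penalized objective
        dF_du = gamma * dg_dv * (-2.0)                  # d/du of penalized objective
        u -= lr * dF_du
        v -= lr * dF_dv
    return u, v

u_opt, v_opt = penalty_bilevel()   # both approach 1.0, the bilevel solution
```

No Hessian inversion or unrolled inner loop is needed — only first-order gradients of the penalized objective, which is what makes the approach scale to the large deep-learning problems the paper reports on.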

K-Beam Minimax: Efficient Optimization for Deep Adversarial Learning

1 code implementation • ICML 2018 • Jihun Hamm, Yung-Kyun Noh

Minimax optimization plays a key role in adversarial training of machine learning algorithms, such as learning generative models, domain adaptation, privacy preservation, and robust learning.

Domain Adaptation
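The K-beam idea can be sketched on a toy saddle problem: instead of tracking a single maximizer, keep k candidate maximizers, take an ascent step on each, and take the descent step against the current best candidate. The particular objective, step size, and beam count below are illustrative assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

# Toy saddle problem with saddle point at the origin:
#   min_u max_v f(u, v) = u^2 - v^2 + u*v
def f(u, v):
    return u**2 - v**2 + u * v

def k_beam_minimax(k=4, lr=0.1, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    u = 1.0
    vs = rng.normal(size=k)            # k candidate maximizers ("beams")
    for _ in range(steps):
        vs += lr * (-2.0 * vs + u)     # ascent step df/dv on every beam
        best = vs[np.argmax(f(u, vs))] # beam achieving the highest f
        u -= lr * (2.0 * u + best)     # descent step df/du against the best beam
    return u, vs

u_final, vs_final = k_beam_minimax()   # converges to the saddle at (0, 0)
```

Maintaining several beams hedges against the inner maximization getting stuck in a poor local maximum, which single-track gradient descent-ascent is prone to in adversarial training.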

Fast Interactive Image Retrieval using large-scale unlabeled data

no code implementations • 12 Feb 2018 • Akshay Mehra, Jihun Hamm, Mikhail Belkin

Active learning reduces the number of user interactions by querying the labels of the most informative points, and graph-based semi-supervised learning (GSSL) allows the use of abundant unlabeled data alongside the limited labeled data provided by the user.

Active Learning Binary Classification +2

Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples

no code implementations • ICLR 2018 • Jihun Hamm, Akshay Mehra

We demonstrate the minimax defense with two types of attack classes -- gradient-based and neural network-based attacks.

Minimax Filter: Learning to Preserve Privacy from Inference Attacks

1 code implementation • 12 Oct 2016 • Jihun Hamm

The paper proposes a novel filter-based mechanism which preserves privacy of continuous and high-dimensional attributes against inference attacks.

Classification Emotion Classification +1

Learning Privately from Multiparty Data

no code implementations • 10 Feb 2016 • Jihun Hamm, Paul Cao, Mikhail Belkin

How can we build an accurate and differentially private global classifier by combining locally-trained classifiers from different parties, without access to any party's private data?

Activity Recognition Network Intrusion Detection
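The general flavor of combining locally trained classifiers under differential privacy can be sketched as a noisy vote: each party's local classifier predicts a label, and the global label is the argmax of Laplace-noised vote counts. This is a generic sketch of private aggregation, not the paper's exact mechanism; the `private_vote` helper, the sensitivity-2 noise scale, and the epsilon values are illustrative assumptions.

```python
import numpy as np

def private_vote(local_predictions, num_classes, epsilon=1.0, seed=0):
    """Differentially private majority vote over parties' local predictions (sketch)."""
    rng = np.random.default_rng(seed)
    counts = np.bincount(local_predictions, minlength=num_classes).astype(float)
    # One party changing its vote moves two counts by 1 each, hence scale 2/epsilon.
    counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# Nine of ten hypothetical parties predict class 1; with a clear majority,
# the noisy vote almost surely returns the majority label.
preds = np.array([1] * 9 + [0])
label = private_vote(preds, num_classes=2, epsilon=5.0)
```

Because only vote counts (never raw data) leave each party, and those counts are noised, no party's private training data is exposed to the aggregator.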

Probabilistic Zero-shot Classification with Semantic Rankings

no code implementations • 27 Feb 2015 • Jihun Hamm, Mikhail Belkin

In this paper we propose a non-metric ranking-based representation of semantic similarity that allows natural aggregation of semantic information from multiple heterogeneous sources.

Classification General Classification +3

Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices

no code implementations • 11 Jan 2015 • Jihun Hamm, Adam Champion, Guoxing Chen, Mikhail Belkin, Dong Xuan

Smart devices with built-in sensors, computational capabilities, and network connectivity have become increasingly pervasive.

Privacy Preserving

Extended Grassmann Kernels for Subspace-Based Learning

no code implementations • NeurIPS 2008 • Jihun Hamm, Daniel D. Lee

Subspace-based learning problems involve data whose elements are linear subspaces of a vector space.

General Classification
