Search Results for author: Jongheon Jeong

Found 18 papers, 14 papers with code

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

1 code implementation 2 Apr 2024 KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee

To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.

Collaborative Score Distillation for Consistent Visual Synthesis

no code implementations 4 Jul 2023 Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin

Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.

Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck

1 code implementation CVPR 2023 Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin

In practical scenarios where training data is limited, many predictive signals in the data may instead come from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such (so-called) "shortcut" signals: this makes the model fragile under various distribution shifts.
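For context, the objective referred to in the title is the classical information bottleneck; the paper extends it with an explicit nuisance variable, which is not reproduced here. In standard notation (not the paper's), one seeks a representation $Z$ of the input $X$ that stays predictive of the label $Y$ while compressing away everything else:

$\max_{p(z|x)} \; I(Z; Y) - \beta \, I(Z; X)$,

where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta > 0$ trades prediction against compression.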

Adversarial Robustness · Novelty Detection

Guiding Energy-based Models via Contrastive Latent Variables

1 code implementation 6 Mar 2023 Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin

To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.
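As a rough sketch of the object the abstract refers to (generic notation, not the parameterization used in the paper), a latent-variable EBM defines the joint density of data $x$ and latent $z$ through a single energy function $E_\theta$:

$p_\theta(x, z) = \exp(-E_\theta(x, z)) / Z(\theta)$, with $Z(\theta) = \int \exp(-E_\theta(x, z)) \, dx \, dz$,

so that the data marginal $p_\theta(x) = \int p_\theta(x, z) \, dz$ and the conditional over the contrastive latent $p_\theta(z \mid x)$ are both induced by the same energy.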

Representation Learning

Confidence-aware Training of Smoothed Classifiers for Certified Robustness

1 code implementation 18 Dec 2022 Jongheon Jeong, Seojin Kim, Jinwoo Shin

For smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness is well evidenced in the literature: i.e., increasing the robustness of the classifier at one input can come at the expense of decreased accuracy at other inputs.
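For readers unfamiliar with smoothed classifiers, the sketch below shows the standard prediction rule they refer to: a majority vote of the base classifier over Gaussian-perturbed copies of the input. It is a generic PyTorch illustration (the names base_classifier, sigma, and n_samples are placeholders, not the paper's code); the paper's actual contribution, how the base classifier is trained, is not shown.

    import torch

    def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, batch_size=50):
        # Majority-vote prediction of a randomly smoothed classifier.
        # base_classifier: torch.nn.Module mapping a batch of inputs to logits.
        # x: a single input tensor of shape (C, H, W).
        # sigma: standard deviation of the isotropic Gaussian noise.
        counts = None
        remaining = n_samples
        with torch.no_grad():
            while remaining > 0:
                b = min(batch_size, remaining)
                remaining -= b
                # Replicate the input and add i.i.d. Gaussian noise to each copy.
                noisy = x.unsqueeze(0).repeat(b, 1, 1, 1)
                noisy = noisy + sigma * torch.randn_like(noisy)
                logits = base_classifier(noisy)
                preds = logits.argmax(dim=1)
                binc = torch.bincount(preds, minlength=logits.shape[1])
                counts = binc if counts is None else counts + binc
        # The smoothed prediction is the class most frequently chosen under noise.
        return counts.argmax().item()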

Adversarial Robustness

NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation

1 code implementation 10 Aug 2022 Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee

Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.
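As a minimal illustration of the paradigm described above (not of NOTE itself, which specifically targets temporally correlated test streams), the sketch below adapts a model on an unlabeled test batch by minimizing prediction entropy, in the spirit of TENT-style test-time adaptation; all names are placeholders.

    import torch
    import torch.nn.functional as F

    def tta_step(model, optimizer, test_batch):
        # One generic test-time adaptation step: no labels are used; the model
        # is updated to make confident (low-entropy) predictions on the
        # incoming test batch.
        model.train()  # keep normalization layers in adaptive mode
        logits = model(test_batch)
        log_probs = F.log_softmax(logits, dim=1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        return logits.detach()  # predictions for the current batch

In practice such methods usually update only a small subset of parameters, e.g., the affine parameters of normalization layers, to keep adaptation stable.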

Autonomous Driving · Test-time Adaptation

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation NeurIPS 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
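Concretely, in the standard formulation of randomized smoothing (the notation below follows the common certification result of Cohen et al., not anything specific to SmoothMix), the smoothed classifier is

$g(x) = \arg\max_{c} \; \mathbb{P}_{\epsilon \sim \mathcal{N}(0, \sigma^2 I)}\big[\, f(x + \epsilon) = c \,\big]$,

and if the top class is returned with probability at least $p_A$ while every other class has probability at most $p_B$, then $g$ is provably constant within an $\ell_2$-ball of radius

$R = \tfrac{\sigma}{2}\big(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\big)$,

where $\Phi^{-1}$ is the inverse standard Gaussian CDF.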

OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data

1 code implementation 29 Jun 2021 Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin

Semi-supervised learning (SSL) has been a powerful strategy for leveraging a small number of labels to learn better representations.

Contrastive Learning · Representation Learning

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations ICML Workshop AML 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness

Training GANs with Stronger Augmentations via Contrastive Discriminator

1 code implementation ICLR 2021 Jongheon Jeong, Jinwoo Shin

Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.
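As a rough sketch of the recipe the abstract alludes to, augmentation is applied to both real and generated samples before they reach the discriminator. This is the generic practice, not the paper's contrastive discriminator (which additionally trains the discriminator with a contrastive objective), and augment is a placeholder for any image augmentation function.

    import torch
    import torch.nn.functional as F

    def discriminator_step(D, G, real, z, augment, opt_D):
        # One discriminator update with augmentation applied to BOTH real and
        # generated samples, the usual recipe against discriminator overfitting.
        fake = G(z).detach()
        logits_real = D(augment(real))
        logits_fake = D(augment(fake))
        # Non-saturating GAN loss for the discriminator.
        loss = F.softplus(-logits_real).mean() + F.softplus(logits_fake).mean()
        opt_D.zero_grad()
        loss.backward()
        opt_D.step()
        return loss.item()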

Contrastive Learning · Data Augmentation +1

Consistency Regularization for Adversarial Robustness

1 code implementation ICML Workshop AML 2021 Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin

Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks.
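For reference, the sketch below shows a standard PGD-based adversarial training step, the AT baseline the sentence refers to; the consistency regularizer proposed in the paper is not included, and all names are placeholders.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Projected gradient descent: perturb x inside an L-infinity ball of
        # radius eps so as to maximize the classification loss.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
                x_adv = x_adv.clamp(0, 1)                              # keep valid pixel range
        return x_adv.detach()

    def adversarial_training_step(model, optimizer, x, y):
        # Standard AT: train on adversarial examples instead of clean inputs.
        x_adv = pgd_attack(model, x, y)
        loss = F.cross_entropy(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()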

Adversarial Robustness · Data Augmentation

Consistency Regularization for Certified Robustness of Smoothed Classifiers

1 code implementation NeurIPS 2020 Jongheon Jeong, Jinwoo Shin

A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.

Adversarial Robustness

M2m: Imbalanced Classification via Major-to-minor Translation

1 code implementation CVPR 2020 Jaehyung Kim, Jongheon Jeong, Jinwoo Shin

In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.

Classification · General Classification +3

Training CNNs with Selective Allocation of Channels

no code implementations 11 May 2019 Jongheon Jeong, Jinwoo Shin

Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.

Selective Convolutional Units: Improving CNNs via Channel Selectivity

no code implementations ICLR 2019 Jongheon Jeong, Jinwoo Shin

Bottleneck structures with identity (e.g., residual) connections have become a popular paradigm for designing deep convolutional neural networks (CNNs) that process large-scale features efficiently.
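The sketch below is a minimal PyTorch version of the bottleneck-with-identity structure this sentence refers to; the selective convolutional units proposed in the paper modify such blocks and are not reproduced here.

    import torch.nn as nn

    class Bottleneck(nn.Module):
        # A minimal bottleneck block with an identity (residual) connection:
        # squeeze channels with a 1x1 conv, process with a 3x3 conv, expand
        # back with a 1x1 conv, then add the block's input.
        def __init__(self, channels, reduction=4):
            super().__init__()
            mid = channels // reduction
            self.body = nn.Sequential(
                nn.Conv2d(channels, mid, kernel_size=1, bias=False),
                nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
                nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
                nn.Conv2d(mid, channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.body(x) + x)  # identity connection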

Model Compression
