Search Results for author: Jongheon Jeong

Found 25 papers, 20 papers with code

Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness

1 code implementation • 13 Nov 2024 • Suhyeok Jang, Seojin Kim, Jinwoo Shin, Jongheon Jeong

We also find that such fine-tuning can be done by updating only a small fraction of the classifier's parameters.

Adversarial Robustness • Denoising +1
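
The finding above, that fine-tuning only needs to touch a small fraction of the classifier's parameters, is a parameter-efficient recipe. Below is a minimal PyTorch sketch of the general idea; the choice of which parameters stay trainable (here, an illustrative "last block plus head" subset) is an assumption, not the paper's exact selection:

```python
import torch

def freeze_all_but(classifier, trainable_keys=("layer4", "fc")):
    """Freeze every parameter except those matching `trainable_keys`.

    The key set here (last ResNet block + linear head) is illustrative;
    the paper only states that a small fraction of parameters is updated.
    """
    for name, p in classifier.named_parameters():
        p.requires_grad = any(k in name for k in trainable_keys)
    trainable = [p for p in classifier.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```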

Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think

1 code implementation • 9 Oct 2024 • Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, Saining Xie

Recent studies have shown that the denoising process in (generative) diffusion models can induce meaningful (discriminative) representations inside the model, though the quality of these representations still lags behind that of representations learned through recent self-supervised learning methods.

Denoising • Image Generation +1
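
The snippet above concerns the gap between diffusion-internal features and self-supervised representations; the paper closes it by aligning the two during training. A rough sketch of such an alignment loss, where `proj` (a small learnable head) and the cosine objective are assumptions for illustration rather than the paper's exact formulation:

```python
import torch.nn.functional as F

def alignment_loss(hidden, target, proj):
    """Align an intermediate denoiser feature with a frozen encoder feature.

    hidden: (B, D_h) activation from inside the diffusion transformer
    target: (B, D_t) feature from a frozen self-supervised encoder
    proj:   learnable head mapping D_h -> D_t (an assumed component)
    """
    z = F.normalize(proj(hidden), dim=-1)
    t = F.normalize(target.detach(), dim=-1)
    return -(z * t).sum(dim=-1).mean()  # negative cosine similarity
```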

DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing

1 code implementation • 8 Oct 2024 • June Suk Choi, Kyungmin Lee, Jongheon Jeong, Saining Xie, Jinwoo Shin, Kimin Lee

Through extensive experiments, we show that our method achieves stronger protection and improved mask robustness at lower computational cost than the strongest baseline.

Image Manipulation

Adversarial Robustification via Text-to-Image Diffusion Models

1 code implementation • 26 Jul 2024 • Daewon Choi, Jongheon Jeong, Huiwon Jang, Jinwoo Shin

Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks, requiring plenty of training data.

Adversarial Robustness • Zero-Shot Learning

Margin-aware Preference Optimization for Aligning Diffusion Models without Reference

no code implementations • 10 Jun 2024 • Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong

In this paper, we focus on the alignment of recent text-to-image diffusion models, such as Stable Diffusion XL (SDXL), and find that this "reference mismatch" is indeed a significant problem in aligning these models due to the unstructured nature of visual modalities: e.g., a preference for a particular stylistic aspect can easily induce such a discrepancy.
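
For intuition about what "reference mismatch" avoids: a reference-free preference objective scores preferred and dispreferred samples under the current model only, with no frozen reference model in the loss. The sketch below is a generic margin loss of that flavor, not the paper's MaPO objective; `score_*` (e.g., a negative denoising loss) and `beta` are illustrative assumptions:

```python
import torch.nn.functional as F

def margin_preference_loss(score_chosen, score_rejected, beta=0.1):
    """Generic reference-free preference loss (illustrative, not MaPO itself).

    score_*: per-sample scores under the current model, e.g. a negative
    diffusion denoising loss; no frozen reference model is involved.
    """
    margin = score_chosen - score_rejected
    return -F.logsigmoid(beta * margin).mean()
```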

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

1 code implementation • 2 Apr 2024 • KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee

To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.

Multi-scale Diffusion Denoised Smoothing

1 code implementation • NeurIPS 2023 • Jongheon Jeong, Jinwoo Shin

Along with recent diffusion models, randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale, e.g., large pre-trained models.

Adversarial Robustness • Denoising
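
The method above builds on denoised smoothing, where Gaussian-perturbed inputs are passed through a denoiser before an off-the-shelf classifier. A minimal single-noise-level sketch, assuming generic `denoiser` and `classifier` callables; the paper's multi-scale scheme over several noise levels is not reproduced here:

```python
import torch

@torch.no_grad()
def denoised_smoothing_predict(x, denoiser, classifier,
                               sigma=0.25, n=100, num_classes=10):
    """Majority vote of classifier(denoiser(x + noise)), noise ~ N(0, sigma^2 I).

    Single noise level only; `denoiser`, `classifier`, and the defaults
    are placeholders, not the paper's specific models or settings.
    """
    votes = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        pred = classifier(denoiser(noisy)).argmax(dim=-1)
        votes[pred] += 1
    return votes.argmax().item()
```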

Collaborative Score Distillation for Consistent Visual Synthesis

1 code implementation • 4 Jul 2023 • Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin

Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.

Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck

1 code implementation • CVPR 2023 • Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin

In practical scenarios where training data is limited, many predictive signals in the data may instead come from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such so-called "shortcut" signals: this makes the model fragile under various distribution shifts.

Adversarial Robustness • Novelty Detection

Guiding Energy-based Models via Contrastive Latent Variables

1 code implementation • 6 Mar 2023 • Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin

To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.

Representation Learning

Confidence-aware Training of Smoothed Classifiers for Certified Robustness

1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin

For smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness is well evidenced in the literature: increasing a classifier's robustness for one input can come at the expense of decreased accuracy on other inputs.

Adversarial Robustness

NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation

1 code implementation • 10 Aug 2022 • Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee

Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.

Autonomous Driving • Test-time Adaptation
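
As background on the adaptation loop itself, here is the standard entropy-minimization baseline in the style of Tent, not NOTE's method (which additionally handles temporally correlated, non-i.i.d. test streams):

```python
import torch

def tta_step(model, x, optimizer):
    """One generic test-time adaptation step on an unlabeled test batch:
    minimize the entropy of the model's own predictions (Tent-style)."""
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return probs.argmax(dim=-1)  # predictions for the current batch
```

In practice the optimizer usually covers only a small parameter subset, e.g. normalization-layer affine parameters.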

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation • NeurIPS 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
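For context, the standard certificate behind randomized smoothing (Cohen et al., 2019): if, under noise $\delta \sim \mathcal{N}(0, \sigma^2 I)$, the base classifier returns the top class with probability at least $p_A$ and any other class with probability at most $p_B$, the smoothed classifier is provably constant within an $\ell_2$ ball of radius

$$R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right),$$

where $\Phi^{-1}$ is the inverse standard Gaussian CDF.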

OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data

1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin

Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels into learning better representations.

Contrastive Learning • Representation Learning

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations • ICML Workshop AML 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness

Training GANs with Stronger Augmentations via Contrastive Discriminator

1 code implementation • ICLR 2021 • Jongheon Jeong, Jinwoo Shin

Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.

Contrastive Learning • Data Augmentation +2

Consistency Regularization for Adversarial Robustness

1 code implementation • ICML Workshop AML 2021 • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin

Adversarial training (AT) is currently one of the most successful methods to obtain adversarial robustness in deep neural networks.

Adversarial Robustness • Data Augmentation
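
As context for the entry above, a minimal PGD-based adversarial training step, i.e., the standard AT inner/outer loop; the paper's consistency regularization across data augmentations sits on top of this and is not shown:

```python
import torch
import torch.nn.functional as F

def pgd_at_step(model, x, y, optimizer, eps=8/255, alpha=2/255, steps=10):
    """One standard PGD adversarial-training step (inputs assumed in [0, 1])."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):  # inner maximization: craft the perturbation
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = ((delta + alpha * grad.sign())
                 .clamp(-eps, eps).detach().requires_grad_(True))
    optimizer.zero_grad()  # outer minimization on the adversarial example
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    optimizer.step()
```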

Consistency Regularization for Certified Robustness of Smoothed Classifiers

1 code implementation • NeurIPS 2020 • Jongheon Jeong, Jinwoo Shin

A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.

Adversarial Robustness
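
The titular consistency regularization encourages the base classifier to predict similarly across Gaussian-noisy copies of the same input, which in turn stabilizes the smoothed prediction above. A simplified sketch of such a term, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def consistency_loss(f, x, sigma=0.25, m=2):
    """Pull predictions on m noisy copies of x toward their average
    (a simplified consistency term, not the paper's exact loss)."""
    log_probs = [F.log_softmax(f(x + sigma * torch.randn_like(x)), dim=-1)
                 for _ in range(m)]
    avg = torch.stack(log_probs).exp().mean(dim=0)  # mean predicted distribution
    return sum(F.kl_div(lp, avg, reduction="batchmean")
               for lp in log_probs) / m  # KL(avg || p_i), averaged over copies
```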

M2m: Imbalanced Classification via Major-to-minor Translation

1 code implementation • CVPR 2020 • Jaehyung Kim, Jongheon Jeong, Jinwoo Shin

In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks struggle to generalize to a balanced testing criterion.

Classification • Diversity +4

Training CNNs with Selective Allocation of Channels

no code implementations • 11 May 2019 • Jongheon Jeong, Jinwoo Shin

Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.

Selective Convolutional Units: Improving CNNs via Channel Selectivity

no code implementations • ICLR 2019 • Jongheon Jeong, Jinwoo Shin

Bottleneck structures with identity (e.g., residual) connections have become popular paradigms for designing deep convolutional neural networks (CNNs) that process large-scale features efficiently.

Model Compression
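
The bottleneck-with-identity structure referenced above, as a standard PyTorch sketch (the generic ResNet-style block, not the paper's proposed selective convolutional units):

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Standard bottleneck block with an identity (residual) connection:
    reduce channels with a 1x1 conv, process with a 3x3 conv, expand back
    with a 1x1 conv, then add the input."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # identity shortcut
```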
