no code implementations • 13 Dec 2024 • Jaehwan Jeong, Sumin In, Sieun Kim, Hannie Shin, Jongheon Jeong, Sang Ho Yoon, Jaewook Chung, Sangpil Kim
The rising use of deepfakes in criminal activities poses a significant threat, sparking widespread controversy.
1 code implementation • 13 Nov 2024 • Suhyeok Jang, Seojin Kim, Jinwoo Shin, Jongheon Jeong
We also find that such fine-tuning can be done by updating only a small fraction of the classifier's parameters.
1 code implementation • 9 Oct 2024 • Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, Saining Xie
Recent studies have shown that the denoising process in (generative) diffusion models can induce meaningful (discriminative) representations inside the model, though the quality of these representations still lags behind those learned through recent self-supervised learning methods.
Ranked #3 on Image Generation on ImageNet 256x256
1 code implementation • 8 Oct 2024 • June Suk Choi, Kyungmin Lee, Jongheon Jeong, Saining Xie, Jinwoo Shin, Kimin Lee
Through extensive experiments, we show that our method achieves stronger protection and improved mask robustness with lower computational costs compared to the strongest baseline.
1 code implementation • 26 Jul 2024 • Daewon Choi, Jongheon Jeong, Huiwon Jang, Jinwoo Shin
Adversarial robustness has conventionally been believed to be a challenging property to encode into neural networks, requiring plenty of training data.
no code implementations • 10 Jun 2024 • Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong
In this paper, we focus on the alignment of recent text-to-image diffusion models, such as Stable Diffusion XL (SDXL), and find that this "reference mismatch" is indeed a significant problem in aligning these models due to the unstructured nature of visual modalities: e.g., a preference for a particular stylistic aspect can easily induce such a discrepancy.
1 code implementation • 2 Apr 2024 • KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee
To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.
1 code implementation • NeurIPS 2023 • Jongheon Jeong, Jinwoo Shin
Along with recent diffusion models, randomized smoothing has become one of the few tangible approaches that offer adversarial robustness to models at scale, e.g., large pre-trained models.
1 code implementation • 4 Jul 2023 • Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin
Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.
4 code implementations • CVPR 2023 • Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, Onkar Dabeer
Visual anomaly classification and segmentation are vital for automating industrial quality inspection.
Ranked #1 on zero-shot anomaly detection on MVTec AD
1 code implementation • CVPR 2023 • Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such (so-called) "shortcut" signals: this makes the model fragile under various distribution shifts.
1 code implementation • 6 Mar 2023 • Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin
To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.
1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin
For smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness is well evidenced in the literature: i.e., increasing the robustness of a classifier for one input can come at the expense of decreased accuracy on other inputs.
1 code implementation • 10 Aug 2022 • Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee
Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.
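As a minimal illustration of this paradigm (a sketch of one simple, well-known form of TTA — adapting normalization statistics to the test stream — not the method proposed in this paper; `adapt_stats` is a hypothetical helper):

```python
import numpy as np

def adapt_stats(running_mean, running_var, test_batch, momentum=0.1):
    """Nudge the model's normalization statistics toward those of an
    unlabeled test batch: no labels, no extra data acquisition."""
    batch_mean = test_batch.mean(axis=0)
    batch_var = test_batch.var(axis=0)
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

# Simulate a distribution shift: test features arrive with a shifted mean.
rng = np.random.default_rng(0)
mean, var = np.zeros(4), np.ones(4)
for _ in range(50):  # continual adaptation over a stream of test batches
    batch = rng.normal(loc=2.0, scale=1.0, size=(32, 4))
    mean, var = adapt_stats(mean, var, batch)
print(mean)  # running mean has drifted toward the shifted test mean (~2.0)
```

The exponential moving average lets the model track a gradually changing test distribution, which is why this family of methods suits continual, stream-based adaptation.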
1 code implementation • 28 Jul 2022 • Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer
Visual anomaly detection is commonly used in industrial quality inspection.
Ranked #27 on Anomaly Detection on VisA
1 code implementation • NeurIPS 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin
Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels to learn better representations.
no code implementations • ICML Workshop AML 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
1 code implementation • ICLR 2021 • Jongheon Jeong, Jinwoo Shin
Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.
1 code implementation • ICML Workshop AML 2021 • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin
Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks.
1 code implementation • NeurIPS 2020 • Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin
Based on this, we propose a new detection score that is specific to the proposed training scheme.
1 code implementation • NeurIPS 2020 • Jongheon Jeong, Jinwoo Shin
A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.
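The smoothing construction described above can be sketched as a Monte Carlo vote over Gaussian-perturbed copies of the input (a minimal illustration with a toy linear base classifier; `smoothed_predict` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, num_classes=2, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P_{eps ~ N(0, sigma^2 I)}[ f(x + eps) = c ]."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal((n,) + x.shape)
    votes = np.array([f(x + eps) for eps in noise])  # hard label per noisy copy
    counts = np.bincount(votes, minlength=num_classes)
    return int(counts.argmax())

# Toy base classifier: thresholds the first coordinate.
f = lambda z: int(z[0] > 0.0)
x = np.array([0.8, -0.3])
print(smoothed_predict(f, x))  # the vote concentrates on class 1
```

In the full method, the same vote counts are also used to lower-bound the top-class probability and derive a certified $\ell_2$ radius around $x$; this sketch shows only the prediction step.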
1 code implementation • CVPR 2020 • Jaehyung Kim, Jongheon Jeong, Jinwoo Shin
In most real-world scenarios, labeled training datasets are highly class-imbalanced, where deep neural networks struggle to generalize to a balanced testing criterion.
Ranked #43 on Long-tail Learning on CIFAR-10-LT (ρ=10)
no code implementations • 11 May 2019 • Jongheon Jeong, Jinwoo Shin
Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.
no code implementations • ICLR 2019 • Jongheon Jeong, Jinwoo Shin
Bottleneck structures with identity (e.g., residual) connections are now an emerging popular paradigm for designing deep convolutional neural networks (CNNs) to process large-scale features efficiently.