1 code implementation • 2 Apr 2024 • KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee
To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.
no code implementations • 4 Jul 2023 • Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin
Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.
5 code implementations • CVPR 2023 • Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, Onkar Dabeer
Visual anomaly classification and segmentation are vital for automating industrial quality inspection.
Ranked #9 on Anomaly Detection on VisA
1 code implementation • CVPR 2023 • Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such "shortcut" signals: this makes the model fragile under various distribution shifts.
1 code implementation • 6 Mar 2023 • Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin
To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.
1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin
Under smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness has been well evidenced in the literature: i.e., increasing the robustness of a classifier for one input can come at the expense of decreased accuracy for other inputs.
1 code implementation • 10 Aug 2022 • Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee
Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.
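A common TTA baseline (shown here as a generic illustration, not this paper's specific method) is to re-estimate batch-normalization statistics from the unlabeled test stream itself; the function name and `momentum` parameter below are assumptions for the sketch:

```python
import numpy as np

def adapt_bn_stats(running_mean, running_var, test_batch, momentum=0.1):
    """Update batch-norm statistics from an unlabeled test batch.

    A minimal test-time adaptation sketch: no labels are needed, only the
    incoming test data. `momentum` controls how quickly the running
    statistics track the (possibly shifted) test distribution.
    """
    batch_mean = test_batch.mean(axis=0)
    batch_var = test_batch.var(axis=0)
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var
```

Applied repeatedly over a stream of test batches, this continually adapts the model's normalization layers without any additional data acquisition or labeling.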
1 code implementation • 28 Jul 2022 • Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer
Visual anomaly detection is commonly used in industrial quality inspection.
Ranked #15 on Anomaly Detection on VisA (Detection AUROC metric)
1 code implementation • NeurIPS 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
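For context, the standard certificate in this line of work (following Cohen et al., not specific to this paper's contribution) guarantees that the smoothed classifier is constant within an $\ell_2$ radius of $\sigma \, \Phi^{-1}(p_A)$, where $p_A$ lower-bounds the top class's probability under Gaussian noise:

```python
from statistics import NormalDist

def certified_radius(p_a, sigma):
    """Certified l2 radius for a smoothed classifier.

    If the top class has probability at least p_a > 1/2 under noise
    N(0, sigma^2 I), the smoothed prediction cannot change within
    radius sigma * Phi^{-1}(p_a); otherwise no certificate is issued.
    """
    if p_a <= 0.5:
        return 0.0  # top class is not a clear majority: nothing certified
    return sigma * NormalDist().inv_cdf(p_a)
```

Note the trade-off this formula makes explicit: a larger noise level $\sigma$ scales up the radius, but in practice also lowers the achievable $p_A$.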
1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin
Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels to learn better representations.
no code implementations • ICML Workshop AML 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
1 code implementation • ICLR 2021 • Jongheon Jeong, Jinwoo Shin
Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.
1 code implementation • ICML Workshop AML 2021 • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin
Adversarial training (AT) is currently one of the most successful methods for obtaining adversarial robustness in deep neural networks.
1 code implementation • NeurIPS 2020 • Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin
Based on this, we propose a new detection score that is specific to the proposed training scheme.
1 code implementation • NeurIPS 2020 • Jongheon Jeong, Jinwoo Shin
A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into average-case Gaussian robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.
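The smoothing operation itself can be sketched as a Monte-Carlo estimate; the function name, noise level, and sample count below are illustrative placeholders, not this paper's setup:

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).

    `f` maps a batch of inputs (shape (n, *x.shape)) to integer class labels.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = f(x[None] + noise)        # predicted labels under Gaussian noise
    counts = np.bincount(labels)       # vote for the most frequent class
    return int(np.argmax(counts))
```

Because the output depends on an average over perturbations rather than a single point, small $\ell_2$ input changes can only shift the vote gradually, which is what makes certification possible.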
1 code implementation • CVPR 2020 • Jaehyung Kim, Jongheon Jeong, Jinwoo Shin
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.
Ranked #43 on Long-tail Learning on CIFAR-10-LT (ρ=10)
no code implementations • 11 May 2019 • Jongheon Jeong, Jinwoo Shin
Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.
no code implementations • ICLR 2019 • Jongheon Jeong, Jinwoo Shin
Bottleneck structures with identity (e.g., residual) connections have become a popular paradigm for designing deep convolutional neural networks (CNNs) that process large-scale features efficiently.