no code implementations • ICML 2020 • Seong-Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang
Humans have the ability to robustly recognize objects under various factors of variation, such as nonrigid transformations, background noise, and changes in lighting conditions.
1 code implementation • 27 Jun 2024 • Jiwan Chung, Sungjae Lee, Minseo Kim, Seungju Han, Ashkan Yousefpour, Jack Hessel, Youngjae Yu
Understanding these arguments requires selective vision: only specific visual stimuli within an image are relevant to the argument, and relevance can only be understood within the context of a broader argumentative structure.
1 code implementation • 26 Jun 2024 • Liwei Jiang, Kavel Rao, Seungju Han, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, Nouha Dziri
As WildJailbreak considerably upgrades the quality and scale of existing safety resources, it uniquely enables us to examine the scaling effects of data and the interplay of data properties and model capabilities during safety training.
1 code implementation • 26 Jun 2024 • Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri
We introduce WildGuard -- an open, light-weight moderation tool for LLM safety that achieves three goals: (1) identifying malicious intent in user prompts, (2) detecting safety risks of model responses, and (3) determining model refusal rate.
no code implementations • 20 Jun 2024 • Seungbeen Lee, Seungwon Lim, Seungju Han, Giyeong Oh, Hyungjoo Chae, Jiwan Chung, Minju Kim, Beong-woo Kwak, Yeonsoo Lee, Dongha Lee, Jinyoung Yeo, Youngjae Yu
The idea of personality in descriptive psychology, traditionally defined through observable behavior, has now been extended to Large Language Models (LLMs) to better understand their behavior.
2 code implementations • 15 Dec 2023 • Lee Hyun, Kim Sung-Bin, Seungju Han, Youngjae Yu, Tae-Hyun Oh
We introduce this new task of explaining why people laugh in a particular video, along with a dataset for the task.
1 code implementation • 16 Oct 2023 • Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi, Youngjae Yu
NORMLENS consists of 10K human judgments accompanied by free-form explanations covering 2K multimodal situations, and serves as a probe to address two questions: (1) to what extent can models align with average human judgment?
1 code implementation • ICCV 2023 • Seungju Han, Jack Hessel, Nouha Dziri, Yejin Choi, Youngjae Yu
To train CHAMPAGNE, we collect and release YTD-18M, a large-scale corpus of 18M video-based dialogues.
no code implementations • CVPR 2023 • Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han
Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.
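The idea of attacking a biased model to obtain harder samples can be sketched with a simple FGSM-style perturbation. The linear logistic model and the `fgsm_perturb` helper below are illustrative assumptions, not the paper's actual pipeline:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM-style attack on a linear (logistic) classifier, used here as a
    stand-in for the idea that perturbing inputs against a biased model can
    yield synthetic bias-conflicting training samples (simplified sketch)."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    # gradient of binary cross-entropy w.r.t. the input, for a linear model
    grad = [(sigmoid(z) - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # step in the direction that increases the loss
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The perturbed sample lowers the model's confidence in the (spuriously easy) correct label, which is the property exploited when recycling such samples as augmented training data.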
no code implementations • ICCV 2023 • Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han
Although recent studies on designing objective functions robust to label noise, known as robust loss methods, have shown promising results for learning with noisy labels, they suffer from underfitting not only noisy samples but also clean ones, leading to suboptimal model performance.
no code implementations • ICCV 2023 • Yuguang Li, Kai Wang, Hui Li, Seon-Min Rhee, Seungju Han, JiHye Kim, Min Yang, Ran Yang, Feng Zhu
Meanwhile, SANE easily establishes multi-task learning with CORE loss functions on both depth and surface normal estimation, leading to an overall performance leap.
no code implementations • CVPR 2023 • Jingzhi Li, Zidong Guo, Hui Li, Seungju Han, Ji-won Baek, Min Yang, Ran Yang, Sungjoo Suh
By constraining the teacher's search space with reverse distillation, we narrow the intrinsic gap and unleash the potential of feature-only distillation.
1 code implementation • 11 Oct 2022 • Seungju Han, Beomsu Kim, Buru Chang
In this paper, we introduce a new automatic evaluation metric to measure the semantic diversity of generated responses.
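As a toy illustration of measuring semantic diversity (not the paper's actual metric), one can average pairwise cosine distances over stand-in sentence vectors; `bow_vector` and `semantic_diversity` below are hypothetical helpers using bag-of-words counts in place of learned sentence embeddings:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def bow_vector(text):
    """Toy stand-in for a sentence embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def semantic_diversity(responses):
    """Mean pairwise cosine distance; higher = more semantically diverse."""
    vecs = [bow_vector(r) for r in responses]
    pairs = list(combinations(vecs, 2))
    if not pairs:
        return 0.0
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)
```

Identical responses score near 0, while responses sharing no words score 1, matching the intuition that a diversity metric should reward semantically distinct generations.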
no code implementations • CVPR 2022 • Hui Li, Zidong Guo, Seon-Min Rhee, Seungju Han, Jae-Joon Han
We formulate facial landmark detection as a coordinate regression task such that the model can be trained end-to-end.
Ranked #2 on Face Alignment on COFW
1 code implementation • NAACL 2022 • Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, SangBum Kim, Enkhbayar Erdenee, Buru Chang
To better reflect the style of the character, PDP builds the prompts in the form of dialog that includes the character's utterances as dialog history.
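A minimal sketch of building such a dialog-form prompt, assuming a hypothetical `build_persona_prompt` helper (the exact template is not from the paper):

```python
def build_persona_prompt(character, utterances, user_message):
    """Hypothetical PDP-style prompt builder: the character's past
    utterances are framed as dialog history so a language model can
    continue in the character's style."""
    lines = [f"The following is a conversation with {character}."]
    for u in utterances:
        lines.append(f"{character}: {u}")
    lines.append(f"User: {user_message}")
    # end with the character's tag so the model completes in their voice
    lines.append(f"{character}:")
    return "\n".join(lines)
```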
1 code implementation • CVPR 2022 • Caiyuan Zheng, Hui Li, Seon-Min Rhee, Seungju Han, Jae-Joon Han, Peng Wang
A robust semi-supervised framework based on consistency regularization is proposed for scene text recognition (STR), which effectively resolves the instability caused by the domain gap between synthetic and real images.
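The consistency term at the heart of such frameworks can be illustrated in a few lines; the squared-difference form below is an assumed simplification, not necessarily the paper's exact loss:

```python
def consistency_loss(p_weak, p_strong):
    """Sketch of consistency regularization (assumed simplification): the
    model's prediction on a strongly augmented view is pulled toward its
    prediction on a weakly augmented view, which serves as the target."""
    return sum((pw - ps) ** 2 for pw, ps in zip(p_weak, p_strong)) / len(p_weak)
```

The loss is zero when the two views agree and grows as predictions diverge, so unlabeled real images can supervise the model without annotations.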
1 code implementation • NLP4ConvAI (ACL) 2022 • Seungju Han, Beomsu Kim, Seokjun Seo, Enkhbayar Erdenee, Buru Chang
Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of the existing exemplar-based generative models and significantly improves the performance in terms of appropriateness and informativeness.
1 code implementation • Findings (EMNLP) 2021 • Beomsu Kim, Seokjun Seo, Seungju Han, Enkhbayar Erdenee, Buru Chang
G2R consists of two distinct distillation techniques: data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, while model-level G2R transfers the response quality scores assessed by the generative model to the retrieval model via a knowledge distillation loss.
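The model-level distillation step can be sketched as a soft cross-entropy between candidate-score distributions; the temperature-softmax formulation below is an assumed simplification of the paper's loss:

```python
from math import exp, log

def softmax(xs, temperature=1.0):
    exps = [exp(x / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def g2r_distillation_loss(retrieval_scores, teacher_scores, temperature=1.0):
    """Model-level G2R sketch (assumed formulation): a soft cross-entropy
    that pushes the retrieval model's distribution over candidate responses
    toward the quality distribution assessed by the generative teacher."""
    student = softmax(retrieval_scores, temperature)
    teacher = softmax(teacher_scores, temperature)
    return -sum(t * log(s) for t, s in zip(teacher, student))
```

By Gibbs' inequality the loss is minimized when the retrieval scores induce the same distribution as the teacher's quality scores, which is the intended transfer.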
1 code implementation • CVPR 2021 • Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin
Our two-stage scheme allows the network to produce clean-like and robust features from any quality images, by reconstructing their clean images via the invertible decoder.
Ranked #17 on Domain Generalization on ImageNet-C
no code implementations • 13 Feb 2021 • Wissam J. Baddar, Seungju Han, Seonmin Rhee, Jae-Joon Han
In this paper, we propose self-reorganizing and rejuvenating convolutional neural networks, a biologically inspired method for improving the computational resource utilization of neural networks.
2 code implementations • CVPR 2021 • Youngkyu Hong, Seungju Han, Kwanghee Choi, Seokjun Seo, Beomsu Kim, Buru Chang
Although this method surpasses state-of-the-art methods on benchmark datasets, it can be further improved by directly disentangling the source label distribution from the model prediction in the training phase.
Ranked #21 on Long-tail Learning on CIFAR-100-LT (ρ=10)
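The disentangling idea above can be illustrated with a logit-adjustment-style correction that subtracts the log source prior from each class logit; this is an assumed simplification, not the paper's full formulation:

```python
from math import log

def adjust_logits(logits, source_priors):
    """Sketch of disentangling a long-tailed source label distribution from
    the model's prediction: subtract the log class prior from each logit
    (logit-adjustment-style correction; assumed simplification)."""
    return [z - log(p) for z, p in zip(logits, source_priors)]
```

On a toy two-class case with a heavily skewed prior, the correction demotes the head class and can flip the prediction toward the tail class, which is the behavior long-tail methods aim for.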
no code implementations • Asian Conference on Computer Vision (ACCV) 2020 • Insoo Kim, Seungju Han, Seong-Jin Park, Ji-won Baek, Jinwoo Shin, Jae-Joon Han, Changkyu Choi
Softmax-based learning methods have shown state-of-the-art performance on large-scale face recognition tasks.
Ranked #1 on Face Verification on CALFW