Search Results for author: Seungju Han

Found 18 papers, 10 papers with code

Meta Variance Transfer: Learning to Augment from the Others

no code implementations · ICML 2020 · Seong-Jin Park, Seungju Han, Ji-won Baek, Insoo Kim, Juhwan Song, Hae Beom Lee, Jae-Joon Han, Sung Ju Hwang

Humans have the ability to robustly recognize objects with various factors of variations such as nonrigid transformation, background noise, and change in lighting conditions.

Face Recognition Meta-Learning +1

SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models

1 code implementation · 15 Dec 2023 · Lee Hyun, Kim Sung-Bin, Seungju Han, Youngjae Yu, Tae-Hyun Oh

We introduce a new task of explaining why people laugh in a particular video, along with a dataset for this task.

Video Understanding

Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms

1 code implementation · 16 Oct 2023 · Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi, Youngjae Yu

NORMLENS consists of 10K human judgments accompanied by free-form explanations covering 2K multimodal situations, and serves as a probe to address two questions: (1) to what extent can models align with average human judgment?

Rethinking Feature-Based Knowledge Distillation for Face Recognition

no code implementations · CVPR 2023 · Jingzhi Li, Zidong Guo, Hui Li, Seungju Han, Ji-won Baek, Min Yang, Ran Yang, Sungjoo Suh

By constraining the teacher's search space with reverse distillation, we narrow the intrinsic gap and unleash the potential of feature-only distillation.

Face Recognition Knowledge Distillation

CORE: Co-planarity Regularized Monocular Geometry Estimation with Weak Supervision

no code implementations · ICCV 2023 · Yuguang Li, Kai Wang, Hui Li, Seon-Min Rhee, Seungju Han, JiHye Kim, Min Yang, Ran Yang, Feng Zhu

Meanwhile, SANE readily supports multi-task learning with CORE loss functions on both depth and surface normal estimation, leading to an overall performance leap.

Depth Estimation Multi-Task Learning +2

Sample-wise Label Confidence Incorporation for Learning with Noisy Labels

no code implementations · ICCV 2023 · Chanho Ahn, Kikyung Kim, Ji-won Baek, Jongin Lim, Seungju Han

Although recent studies on designing objective functions robust to label noise, known as the robust loss approach, have shown promising results for learning with noisy labels, they suffer from underfitting not only noisy samples but also clean ones, leading to suboptimal model performance.

Learning with noisy labels

BiasAdv: Bias-Adversarial Augmentation for Model Debiasing

no code implementations · CVPR 2023 · Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han

Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.

Adversarial Attack Data Augmentation
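The BiasAdv idea above hinges on attacking a biased model so its spurious-correlation-driven prediction flips, yielding synthetic bias-conflicting training samples. As a heavily simplified sketch (not the paper's full method), a one-step FGSM-style perturbation against the biased model's loss gradient illustrates the mechanism; `fgsm_attack` and its arguments are hypothetical names:

```python
import numpy as np

def fgsm_attack(x, grad, epsilon=0.1):
    """One-step FGSM-style perturbation: move the input along the sign
    of the loss gradient of the (biased) model. In the BiasAdv setting,
    such perturbed inputs can serve as synthetic bias-conflicting
    samples for training a debiased model. Illustrative sketch only.
    """
    x_adv = x + epsilon * np.sign(grad)
    # Keep the perturbed image in the valid pixel range.
    return np.clip(x_adv, 0.0, 1.0)
```

In practice the gradient would come from backpropagating the biased model's loss with respect to the input; here it is passed in directly to keep the sketch self-contained.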

Measuring and Improving Semantic Diversity of Dialogue Generation

1 code implementation · 11 Oct 2022 · Seungju Han, Beomsu Kim, Buru Chang

In this paper, we introduce a new automatic evaluation metric to measure the semantic diversity of generated responses.

Dialogue Generation
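To make the notion of semantic diversity concrete: one common proxy (an illustration of the general idea, not this paper's exact metric) is the average pairwise cosine distance between sentence embeddings of the generated responses:

```python
import numpy as np

def semantic_diversity(embeddings):
    """Average pairwise cosine distance between response embeddings.
    0.0 means all responses are semantically identical; larger values
    mean more diverse responses. Illustrative proxy metric only.
    """
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize
    sims = e @ e.T                                    # cosine similarities
    iu = np.triu_indices(len(e), k=1)                 # unique pairs
    return float(np.mean(1.0 - sims[iu]))
```

The embeddings would typically come from a pretrained sentence encoder; any encoder producing fixed-size vectors works with this sketch.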

Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances

1 code implementation · NAACL 2022 · Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, SangBum Kim, Enkhbayar Erdenee, Buru Chang

To better reflect the style of the character, PDP builds the prompts in the form of dialog that includes the character's utterances as dialog history.

Chatbot Retrieval

Pushing the Performance Limit of Scene Text Recognizer without Human Annotation

1 code implementation · CVPR 2022 · Caiyuan Zheng, Hui Li, Seon-Min Rhee, Seungju Han, Jae-Joon Han, Peng Wang

A robust consistency regularization based semi-supervised framework is proposed for STR, which can effectively solve the instability issue due to domain inconsistency between synthetic and real images.

Scene Text Recognition

Understanding and Improving the Exemplar-based Generation for Open-domain Conversation

1 code implementation · NLP4ConvAI (ACL) 2022 · Seungju Han, Beomsu Kim, Seokjun Seo, Enkhbayar Erdenee, Buru Chang

Extensive experiments demonstrate that our proposed training method alleviates the drawbacks of the existing exemplar-based generative models and significantly improves the performance in terms of appropriateness and informativeness.

Informativeness Retrieval

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation

1 code implementation · Findings (EMNLP) 2021 · Beomsu Kim, Seokjun Seo, Seungju Han, Enkhbayar Erdenee, Buru Chang

G2R consists of two distinct techniques of distillation: the data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, and the model-level G2R transfers the response quality score assessed by the generative model to the score of the retrieval model by the knowledge distillation loss.

Knowledge Distillation Retrieval
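The data-level half of G2R described above can be sketched in a few lines: each (context, response) pair in the dialogue dataset is augmented with extra responses sampled from the large generative model. The `generator` callable below is a hypothetical stand-in for that model:

```python
def data_level_g2r(dialogues, generator, n_aug=2):
    """Data-level G2R sketch: augment each (context, response) pair with
    n_aug additional responses produced by a large generative model.
    `generator` is a hypothetical callable mapping context -> response.
    The model-level half (transferring quality scores via a distillation
    loss) is omitted here.
    """
    augmented = []
    for context, response in dialogues:
        augmented.append((context, response))          # keep the original pair
        for _ in range(n_aug):
            augmented.append((context, generator(context)))  # add generated responses
    return augmented
```

The retrieval model is then trained on the enlarged dataset, so it can surface generator-quality responses at retrieval-model cost.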

Quality-Agnostic Image Recognition via Invertible Decoder

1 code implementation · CVPR 2021 · Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin

Our two-stage scheme allows the network to produce clean-like, robust features from images of any quality by reconstructing their clean counterparts via the invertible decoder.

Data Augmentation Domain Generalization +2

Self-Reorganizing and Rejuvenating CNNs for Increasing Model Capacity Utilization

no code implementations · 13 Feb 2021 · Wissam J. Baddar, Seungju Han, Seonmin Rhee, Jae-Joon Han

In this paper, we propose self-reorganizing and rejuvenating convolutional neural networks, a biologically inspired method for improving the computational resource utilization of neural networks.

Disentangling Label Distribution for Long-tailed Visual Recognition

2 code implementations · CVPR 2021 · Youngkyu Hong, Seungju Han, Kwanghee Choi, Seokjun Seo, Beomsu Kim, Buru Chang

Although this method surpasses state-of-the-art methods on benchmark datasets, it can be further improved by directly disentangling the source label distribution from the model prediction in the training phase.

Image Classification Long-tail Learning
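The general idea of disentangling a long-tailed source label distribution from the model's prediction can be illustrated with a simple logit adjustment (a common baseline in this area, not this paper's exact LADE formulation): subtract the log of the source class prior and, if a target distribution is known, add its log prior:

```python
import numpy as np

def adjust_logits(logits, source_prior, target_prior=None):
    """Disentangle the (long-tailed) source label distribution from
    model logits by subtracting the log source prior; optionally
    re-entangle a known target prior. Simplified logit-adjustment
    sketch, not the paper's exact method.
    """
    adjusted = np.asarray(logits, dtype=float) - np.log(np.asarray(source_prior, dtype=float))
    if target_prior is not None:
        adjusted = adjusted + np.log(np.asarray(target_prior, dtype=float))
    return adjusted
```

With uniform logits and a skewed source prior of [0.9, 0.1], the adjustment favors the tail class, which is the intended debiasing effect.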
