2 code implementations • 6 Feb 2024 • Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun
Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while preserving model capabilities.
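As a quick illustration of the general KD setup (not the specific objective proposed in this paper), a minimal PyTorch sketch of the standard soft-target distillation loss might look like the following; the `temperature` and `alpha` values are illustrative choices, not the paper's hyperparameters.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard KD objective: cross-entropy on the labels plus a
    temperature-scaled KL divergence to the teacher's soft targets."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale gradients to match the hard-label term
    return alpha * ce + (1 - alpha) * kl
```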
1 code implementation • 27 Nov 2023 • Yongjin Yang, Jongwoo Ko, Se-Young Yun
Vision-Language Models (VLMs) like CLIP have demonstrated remarkable applicability across a variety of downstream tasks, including zero-shot image classification.
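For context, zero-shot classification with CLIP works by embedding the image and a set of class-name prompts into a shared space and picking the most similar prompt. A minimal sketch using the openai `clip` package follows; the image path and class names are placeholders, and this is the standard zero-shot recipe rather than the method of this paper.

```python
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
classes = ["dog", "cat", "car"]                              # placeholder labels
texts = clip.tokenize([f"a photo of a {c}" for c in classes])

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(texts)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(classes[probs.argmax().item()])  # predicted class without any training
```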
no code implementations • 24 Oct 2023 • Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun
To tackle this issue, researchers have explored methods for Learning with Noisy Labels to identify clean samples and reduce the influence of noisy labels.
1 code implementation • 16 Oct 2023 • Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun
Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers.
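To give a flavor of structured pruning in general (this is not the importance criterion used in the paper), the toy sketch below removes whole output neurons of a linear layer by weight magnitude, so the pruned layer is genuinely smaller rather than merely sparse.

```python
import torch
import torch.nn as nn

def prune_linear_rows(layer: nn.Linear, keep_ratio: float = 0.5) -> nn.Linear:
    """Toy structured pruning: keep only the output neurons (weight rows)
    with the largest L2 norms. Illustrative magnitude criterion only."""
    norms = layer.weight.detach().norm(dim=1)           # one score per output neuron
    k = max(1, int(keep_ratio * layer.out_features))
    keep = torch.topk(norms, k).indices.sort().values   # indices of neurons to keep
    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    pruned.weight.data = layer.weight.data[keep].clone()
    if layer.bias is not None:
        pruned.bias.data = layer.bias.data[keep].clone()
    return pruned
```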
1 code implementation • 9 Oct 2023 • Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun
To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates adaptive computation paths for each token based on the complexity of generating the subsequent token.
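The general early-exiting idea can be sketched as a per-token confidence check at intermediate layers. The snippet below is an illustrative simplification under assumed names (`layers`, `exit_heads`, `lm_head`, the threshold rule); the paper's actual exit criterion and decoding procedure differ.

```python
import torch

def generate_next_token(hidden, layers, exit_heads, lm_head, threshold=0.9):
    """Illustrative confidence-based early exit for one decoding step
    (assumes batch size 1): after each layer, an intermediate head predicts
    the next token, and if its top probability exceeds `threshold`, the
    remaining layers are skipped for this token."""
    for layer, head in zip(layers, exit_heads):
        hidden = layer(hidden)
        probs = torch.softmax(head(hidden[:, -1]), dim=-1)
        conf, token = probs.max(dim=-1)
        if conf.item() >= threshold:
            return token                                  # exit early
    return torch.softmax(lm_head(hidden[:, -1]), dim=-1).argmax(dim=-1)
```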
1 code implementation • 10 Feb 2023 • Sumyeong Ahn, Jongwoo Ko, Se-Young Yun
To address this limitation, several methods have been developed that enrich the representations of minority samples by leveraging the features of the majority samples.
Ranked #13 on Long-tail Learning on CIFAR-100-LT (ρ=10)
1 code implementation • 3 Feb 2023 • Jongwoo Ko, Seungjoon Park, Minchan Jeong, Sukjin Hong, Euijai Ahn, Du-Seong Chang, Se-Young Yun
Knowledge distillation (KD) is a highly promising method for mitigating the computational problems of pre-trained language models (PLMs).
1 code implementation • 18 Oct 2022 • Jaehoon Oh, Jongwoo Ko, Se-Young Yun
Translation has played a crucial role in improving the performance on multilingual tasks: (1) to generate the target language data from the source language data for training and (2) to generate the source language data from the target language data for inference.
1 code implementation • 15 Jun 2022 • Jongwoo Ko, Bongsoo Yi, Se-Young Yun
While existing methods address this problem from various directions, they still produce unpredictable, sub-optimal results because they rely on posterior information estimated by a feature extractor that is itself corrupted by the noisy labels.
no code implementations • 29 Sep 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network.
1 code implementation • 29 Jun 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network.
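A rough sketch of the self-contrast idea, assuming one feature vector taken from an intermediate sub-network and one from the final backbone output of the same image; this is an InfoNCE-style simplification, not the full SelfCon objective (which also uses label information).

```python
import torch
import torch.nn.functional as F

def selfcon_style_loss(feat_mid, feat_final, temperature=0.1):
    """Illustrative self-contrast: the two views of the SAME image
    (intermediate exit vs. final output) form a positive pair, and the
    other samples in the batch act as negatives."""
    z1 = F.normalize(feat_mid, dim=-1)    # (B, D) from an intermediate sub-network
    z2 = F.normalize(feat_final, dim=-1)  # (B, D) from the final backbone output
    logits = z1 @ z2.t() / temperature    # (B, B) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # matching indices are positives
    return F.cross_entropy(logits, targets)
```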
1 code implementation • NeurIPS 2021 • Taehyeon Kim, Jongwoo Ko, Sangwook Cho, Jinhwan Choi, Se-Young Yun
Our framework, coined FINE (filtering noisy instances via their eigenvectors), provides a robust detector built on simple, derivative-free methods with theoretical guarantees; a simplified sketch of the idea appears after the leaderboard entry below.
Ranked #2 on Image Classification on WebVision
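A simplified sketch of the eigenvector-based detection step, assuming per-class penultimate features are available as NumPy arrays; the GMM split and scoring here are illustrative rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def eigenvector_clean_mask(features, labels, num_classes):
    """FINE-style detection (simplified): for each class, take the principal
    eigenvector of the gram matrix of its features, score every sample by its
    squared alignment with that eigenvector, and split clean vs. noisy samples
    with a 2-component GMM on the scores."""
    clean_mask = np.zeros(len(labels), dtype=bool)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            clean_mask[idx] = True
            continue
        feats = features[idx]                       # (n_c, D) features of class c
        gram = feats.T @ feats                      # (D, D) class gram matrix
        _, eigvecs = np.linalg.eigh(gram)
        u = eigvecs[:, -1]                          # principal eigenvector
        scores = (feats @ u) ** 2                   # alignment scores
        gmm = GaussianMixture(n_components=2).fit(scores.reshape(-1, 1))
        clean_comp = gmm.means_.argmax()            # high-score component = clean
        clean_mask[idx] = gmm.predict(scores.reshape(-1, 1)) == clean_comp
    return clean_mask
```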