2 code implementations • 6 Feb 2024 • Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun
Knowledge distillation (KD) is widely used to compress a teacher model into a smaller student model, reducing inference cost and memory footprint while preserving the teacher's capabilities.
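As background, here is a minimal sketch of the classic logit-distillation objective (temperature-softened KL divergence plus cross-entropy), not this paper's specific method; the temperature `T` and mixing weight `alpha` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: KL between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```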
1 code implementation • 14 Dec 2023 • Kangwook Jang, Sungnyun Kim, Hoirin Kim
Despite the strong performance of Transformer-based speech self-supervised learning (SSL) models, their large parameter size and computational cost make them difficult to deploy.
1 code implementation • 24 May 2023 • Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn
In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.
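As a hypothetical sketch of how such multi-modal conditions could feed a single diffusion model, the snippet below projects each modality into a shared token space and concatenates the tokens as cross-attention context; the encoder names, dimensions, and concatenation scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiModalConditioner(nn.Module):
    def __init__(self, text_dim=768, sketch_dim=256, style_dim=512, d_model=768):
        super().__init__()
        # One projection per modality into a shared token dimension.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.sketch_proj = nn.Linear(sketch_dim, d_model)
        self.style_proj = nn.Linear(style_dim, d_model)

    def forward(self, text_tokens, sketch_feats, style_embed):
        # text_tokens: (B, L, text_dim); sketch_feats: (B, N, sketch_dim);
        # style_embed: (B, style_dim), lifted to a single extra token.
        tokens = [
            self.text_proj(text_tokens),
            self.sketch_proj(sketch_feats),
            self.style_proj(style_embed).unsqueeze(1),
        ]
        # The concatenated sequence would serve as cross-attention context
        # for the diffusion U-Net.
        return torch.cat(tokens, dim=1)
```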
1 code implementation • 23 May 2023 • Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun
Respiratory sound contains crucial information for the early diagnosis of fatal lung diseases.
Ranked #1 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)
1 code implementation • 19 May 2023 • Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim
Transformer-based speech self-supervised learning (SSL) models, such as HuBERT, show impressive performance in various speech processing tasks.
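For illustration, a minimal sketch of feature-level distillation from a speech SSL teacher such as HuBERT, where student hidden states are regressed onto selected teacher layers with an MSE loss; the layer mapping is an assumed detail, not the paper's exact recipe.

```python
import torch.nn.functional as F

def feature_distill_loss(student_hiddens, teacher_hiddens, layer_map):
    # student_hiddens / teacher_hiddens: lists of (B, T, D) hidden states.
    # layer_map: (student_layer, teacher_layer) pairs to align (illustrative).
    loss = 0.0
    for s_idx, t_idx in layer_map:
        loss = loss + F.mse_loss(student_hiddens[s_idx], teacher_hiddens[t_idx])
    return loss / len(layer_map)
```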
1 code implementation • CVPR 2023 • Sungnyun Kim, Sangmin Bae, Se-Young Yun
Fortunately, recent self-supervised learning (SSL) is a promising approach for pretraining a model without annotations, serving as an effective initialization for downstream tasks.
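The generic two-stage recipe looks roughly like the sketch below (illustrative, not this paper's exact pipeline); the checkpoint path and class count are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torchvision

# Encoder whose weights come from SSL pretraining (stage 1, done elsewhere).
encoder = torchvision.models.resnet50(weights=None)
encoder.fc = nn.Identity()  # expose the 2048-d features

state = torch.load("ssl_pretrained.pt")  # hypothetical SSL checkpoint
encoder.load_state_dict(state, strict=False)

# Stage 2: attach a task head and fine-tune on labeled downstream data.
model = nn.Sequential(encoder, nn.Linear(2048, 100))  # 100 = example class count
```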
no code implementations • 13 May 2022 • Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun
Next, we show that data augmentation does not guarantee improved few-shot performance and investigate how its effectiveness depends on the intensity of the augmentation.
no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available under extreme differences between the source and target domains, has recently attracted considerable attention.
2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
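A schematic sketch of one step of such combined pre-training, assuming a model with a `backbone` attribute and a projection head `ssl_head` (both hypothetical names); the self-supervised term here is a simplified view-alignment loss standing in for a full SSL objective.

```python
import torch.nn.functional as F

def pretrain_step(model, ssl_head, x_src, y_src, x_tgt_v1, x_tgt_v2):
    # Supervised term on labeled source-domain data.
    sup = F.cross_entropy(model(x_src), y_src)
    # Self-supervised term on unlabeled target-domain data: pull two
    # augmented views of the same image together (simplified stand-in).
    z1 = F.normalize(ssl_head(model.backbone(x_tgt_v1)), dim=-1)
    z2 = F.normalize(ssl_head(model.backbone(x_tgt_v2)), dim=-1)
    ssl = -(z1 * z2).sum(dim=-1).mean()
    return sup + ssl
```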
no code implementations • 29 Sep 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network.
1 code implementation • 29 Jun 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network.
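A minimal sketch of this idea, simplified to an unsupervised InfoNCE form over a batch (details differ from the paper): projected features from an intermediate exit and from the final exit of the same image are treated as a positive pair against the rest of the batch.

```python
import torch
import torch.nn.functional as F

def selfcon_loss(z_mid, z_final, temperature=0.1):
    # z_mid, z_final: (B, D) projected features from two exits of one network.
    z_mid = F.normalize(z_mid, dim=-1)
    z_final = F.normalize(z_final, dim=-1)
    logits = z_mid @ z_final.t() / temperature                   # (B, B) similarities
    targets = torch.arange(z_mid.size(0), device=z_mid.device)   # diagonal positives
    return F.cross_entropy(logits, targets)
```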
no code implementations • 1 Jan 2021 • Sungnyun Kim, Se-Young Yun
As numerous meta-learning algorithms improve performance on few-shot classification problems in practical applications, accurate uncertainty estimation, though challenging, has come to be regarded as essential.
1 code implementation • 13 Oct 2020 • Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun
Contrastive learning has shown remarkable results in recent self-supervised approaches to visual representation learning.
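For reference, a standard NT-Xent contrastive loss between two augmented views (SimCLR-style); this is background for the approach, not this paper's contribution.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (B, D) projections of two augmented views of the same images.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2B, D)
    sim = z @ z.t() / temperature                         # (2B, 2B) similarities
    sim.fill_diagonal_(float("-inf"))                     # mask self-pairs
    B = z1.size(0)
    # The positive for sample i is its other view at index (i + B) mod 2B.
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```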