Search Results for author: Sungnyun Kim

Found 13 papers, 8 papers with code

DistiLLM: Towards Streamlined Distillation for Large Language Models

2 code implementations • 6 Feb 2024 • Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun

Knowledge distillation (KD) is widely used for compressing a teacher model to a smaller student model, reducing its inference cost and memory footprint while preserving model capabilities.
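As a point of reference for the teacher-to-student compression described above, here is a minimal sketch of a standard token-level KD loss for language models. This is the generic formulation, not DistiLLM's own objective; the temperature, shapes, and vocabulary size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level knowledge distillation: KL divergence between the
    temperature-softened teacher and student distributions."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1).flatten(0, 1)  # (batch*seq, vocab)
    p = F.softmax(teacher_logits / t, dim=-1).flatten(0, 1)
    # scale by t^2 so gradient magnitudes stay comparable across temperatures
    return F.kl_div(s, p, reduction="batchmean") * (t ** 2)

# illustrative usage with random logits (vocabulary size is an arbitrary assumption)
student_logits = torch.randn(4, 16, 1000)
teacher_logits = torch.randn(4, 16, 1000)
loss = kd_loss(student_logits, teacher_logits)
```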

Instruction Following · Knowledge Distillation

STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models

no code implementations • 14 Dec 2023 • Kangwook Jang, Sungnyun Kim, Hoirin Kim

Despite the strong performance of Transformer-based speech self-supervised learning (SSL) models, their large parameter size and computational cost make them unfavorable to deploy.
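The title points to distilling temporal relations between speech frames from teacher to student. Below is a hedged sketch of one common way to express such a relation, as pairwise frame-similarity matrices matched between the two models; the exact STaR objective may differ, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def temporal_relation_loss(student_feats, teacher_feats):
    """Match pairwise frame-to-frame similarity matrices between student and
    teacher features.  feats: (batch, time, dim); the models may have different
    feature dims, so only the (time x time) relation structure is compared."""
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    s_rel = torch.bmm(s, s.transpose(1, 2))  # (batch, time, time)
    t_rel = torch.bmm(t, t.transpose(1, 2))
    return F.mse_loss(s_rel, t_rel)

# illustrative shapes: 2 utterances, 100 frames, student dim 384 vs. teacher dim 768
loss = temporal_relation_loss(torch.randn(2, 100, 384), torch.randn(2, 100, 768))
```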

Relation Self-Supervised Learning

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

1 code implementation • 24 May 2023 • Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn

In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.
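To make the multimodal conditioning concrete, here is a minimal, hypothetical sketch of how heterogeneous conditions (sketch features, a bounding box, a color palette, a style embedding) could be projected into a shared token space before being handed to a diffusion denoiser via cross-attention. The encoders, dimensions, and single-box simplification are assumptions, not DiffBlender's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalConditioner(nn.Module):
    """Illustrative sketch: project heterogeneous condition inputs into a shared
    embedding space and stack them into one conditioning sequence (hypothetical
    dimensions and encoders)."""
    def __init__(self, dim=768):
        super().__init__()
        self.sketch_proj = nn.Linear(256, dim)     # pooled sketch feature -> token
        self.box_proj = nn.Linear(4, dim)          # one box (x1, y1, x2, y2) -> token
        self.palette_proj = nn.Linear(5 * 3, dim)  # 5 RGB colors -> token
        self.style_proj = nn.Linear(512, dim)      # style embedding -> token

    def forward(self, sketch_feat, box, palette, style_emb):
        tokens = torch.stack([
            self.sketch_proj(sketch_feat),
            self.box_proj(box),
            self.palette_proj(palette.flatten(-2)),
            self.style_proj(style_emb),
        ], dim=1)  # (batch, 4, dim)
        return tokens  # would be appended to text tokens for cross-attention

cond = MultimodalConditioner()
tokens = cond(torch.randn(1, 256), torch.rand(1, 4),
              torch.rand(1, 5, 3), torch.randn(1, 512))
```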

Conditional Image Generation · Multimodal Generation +1

Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation

1 code implementation • 19 May 2023 • Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim

Transformer-based speech self-supervised learning (SSL) models, such as HuBERT, show surprisingly strong performance in various speech processing tasks.
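The title names attention map reusing as one of the compression ingredients. Below is a simplified sketch of that idea, assuming a standard multi-head self-attention layer that can optionally recycle a cached attention map from an earlier layer instead of recomputing queries and keys; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ReuseAttentionLayer(nn.Module):
    """Self-attention layer that can reuse a precomputed attention map.
    When `reused_attn` is given, the query/key projections are skipped and the
    cached map is applied to this layer's values."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.out = nn.Linear(dim, dim)

    def _split(self, x, b, t):
        return x.view(b, t, self.heads, self.head_dim).transpose(1, 2)

    def forward(self, x, reused_attn=None):
        b, t, d = x.shape
        v = self._split(self.v(x), b, t)
        if reused_attn is None:
            q, k = self._split(self.q(x), b, t), self._split(self.k(x), b, t)
            attn = (q @ k.transpose(-2, -1) / self.head_dim ** 0.5).softmax(dim=-1)
        else:
            attn = reused_attn  # recycle the earlier layer's map; no q/k computation
        y = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(y), attn

layer1, layer2 = ReuseAttentionLayer(), ReuseAttentionLayer()
h, attn_map = layer1(torch.randn(2, 50, 768))
h2, _ = layer2(h, reused_attn=attn_map)  # second layer reuses the first layer's map
```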

Self-Supervised Learning

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning

1 code implementation • CVPR 2023 • Sungnyun Kim, Sangmin Bae, Se-Young Yun

Fortunately, recent self-supervised learning (SSL) is a promising approach for pretraining a model without annotations, serving as an effective initialization for any downstream task.
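The title describes sampling a coreset from an open set for fine-grained SSL pretraining. The sketch below shows one generic way such a selection could work, keeping the open-set samples whose features are most similar to the target data; the similarity-to-centroid criterion and all dimensions are assumptions, not necessarily the paper's selection rule.

```python
import torch
import torch.nn.functional as F

def select_coreset(openset_feats, target_feats, budget):
    """Keep the `budget` open-set samples whose features are closest (by cosine
    similarity) to the centroid of the fine-grained target dataset."""
    centroid = F.normalize(target_feats.mean(dim=0, keepdim=True), dim=-1)
    sims = F.normalize(openset_feats, dim=-1) @ centroid.t()  # (N_open, 1)
    return sims.squeeze(1).topk(budget).indices

# illustrative usage with random features (feature dim and sizes are assumptions)
open_idx = select_coreset(torch.randn(10000, 512), torch.randn(500, 512), budget=1000)
```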

Representation Learning · Self-Supervised Learning

How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation

no code implementations • 13 May 2022 • Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun

Next, we show that data augmentation does not guarantee few-shot performance improvement and investigate its effectiveness according to the intensity of augmentation.

Data Augmentation · Few-Shot Learning +1

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning

no code implementations • 11 May 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

Cross-domain few-shot learning (CD-FSL), where there are few target samples under extreme differences between source and target domains, has recently attracted considerable attention.
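The title's "re-randomization before fine-tuning" suggests resetting part of a source-pretrained backbone so it is re-learned on the target domain. Here is a minimal sketch of that idea; which layers to reset is a hyperparameter, and the torchvision ResNet and layer names are illustrative assumptions rather than the paper's exact recipe.

```python
import torch.nn as nn
from torchvision.models import resnet18

def rerandomize_top_layers(model: nn.Module, layer_names=("layer4", "fc")):
    """Reset the parameters of the top-most blocks of a pretrained backbone so
    they are re-learned during fine-tuning on the target few-shot data."""
    for name in layer_names:
        for m in getattr(model, name).modules():
            if hasattr(m, "reset_parameters"):
                m.reset_parameters()
    return model

backbone = resnet18(weights=None)  # stands in for a source-pretrained model
backbone = rerandomize_top_layers(backbone)
# ...then fine-tune `backbone` on the few target samples as usual
```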

Cross-Domain Few-Shot Learning · Transfer Learning

Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty

2 code implementations • 1 Feb 2022 • Jaehoon Oh, Sungnyun Kim, Namgyu Ho, Jin-Hwa Kim, Hwanjun Song, Se-Young Yun

This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.

Cross-Domain Few-Shot Learning

Self-Contrastive Learning

no code implementations • 29 Sep 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun

This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network.

Contrastive Learning

Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network

1 code implementation • 29 Jun 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun

To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network.
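To illustrate "self-contrasting within multiple outputs from different levels of a single network," the sketch below uses a toy multi-exit encoder and a plain InfoNCE loss between the intermediate-exit and final-exit projections of the same samples. The architecture is hypothetical, and the loss is a simplified single-viewed stand-in rather than the paper's supervised SelfCon objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitEncoder(nn.Module):
    """Toy multi-exit encoder: one projection from an intermediate layer
    (sub-network exit) and one from the final layer."""
    def __init__(self, in_dim=512, hid=256, proj=128):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())
        self.exit_mid = nn.Linear(hid, proj)    # sub-network exit head
        self.exit_final = nn.Linear(hid, proj)  # final exit head

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return (F.normalize(self.exit_mid(h1), dim=-1),
                F.normalize(self.exit_final(h2), dim=-1))

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE between the two exits of the same samples: the matching pair on
    the diagonal is the positive, all other samples are negatives."""
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

enc = MultiExitEncoder()
z_mid, z_final = enc(torch.randn(32, 512))
loss = info_nce(z_mid, z_final)
```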

Contrastive Learning

Task Calibration for Distributional Uncertainty in Few-Shot Classification

no code implementations • 1 Jan 2021 • Sungnyun Kim, Se-Young Yun

As numerous meta-learning algorithms improve performance when solving few-shot classification problems for practical applications, accurate prediction of uncertainty, though challenging, has been considered essential.

Classification · General Classification +1

MixCo: Mix-up Contrastive Learning for Visual Representation

1 code implementation • 13 Oct 2020 • Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun

Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation.
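The title's "mix-up contrastive learning" suggests training the embedding of a mixed image to agree with the embeddings of both of its source images, weighted by the mixing coefficient. The sketch below is a rough, single-encoder version of that idea with a toy encoder; the exact MixCo formulation (momentum encoders, augmented views, etc.) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixco_loss(encoder, x, lam=0.7, temperature=0.1):
    """Mix pairs of images and push the mixed embedding toward BOTH source
    embeddings with soft targets lam / (1 - lam)."""
    n = x.size(0)
    perm = torch.randperm(n)
    x_mix = lam * x + (1 - lam) * x[perm]

    z = F.normalize(encoder(x), dim=-1)          # embeddings of the originals
    z_mix = F.normalize(encoder(x_mix), dim=-1)  # embeddings of the mixtures

    logits = z_mix @ z.t() / temperature         # (n, n) similarity matrix
    targets = torch.zeros_like(logits)
    targets[torch.arange(n), torch.arange(n)] = lam
    targets[torch.arange(n), perm] += 1 - lam    # soft positive on the mixing partner
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # toy encoder
loss = mixco_loss(encoder, torch.randn(16, 3, 32, 32))
```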

Contrastive Learning · Self-Supervised Learning
