Search Results for author: Sangmin Bae

Found 13 papers, 10 papers with code

Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification

1 code implementation • 15 Dec 2023 • June-Woo Kim, Sangmin Bae, Won-Yang Cho, Byungjo Lee, Ho-Young Jung

Despite the remarkable advances in deep learning technology, achieving satisfactory performance in lung sound classification remains a challenge due to the scarcity of available data.

Ranked #3 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)

Audio Classification · Contrastive Learning · +2

Fine-Tuning the Retrieval Mechanism for Tabular Deep Learning

no code implementations • 13 Nov 2023 • Felix den Breejen, Sangmin Bae, Stephen Cha, Tae-Young Kim, Seoung Hyun Koh, Se-Young Yun

While interest in tabular deep learning has grown significantly, conventional tree-based models still outperform deep learning methods.

Retrieval · Transfer Learning

Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance

1 code implementation • 11 Nov 2023 • June-Woo Kim, Chihyeon Yoon, Miika Toikkanen, Sangmin Bae, Ho-Young Jung

In this work, we propose a straightforward approach to augment imbalanced respiratory sound data using an audio diffusion model as a conditional neural vocoder.

Ranked #2 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)

Audio Classification · Sound Classification

Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding

1 code implementation • 9 Oct 2023 • Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun

To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates adaptive computation paths for each token based on the complexity of generating the subsequent token.
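The early-exiting idea in this snippet can be sketched as a confidence-threshold rule over intermediate-layer predictions. This is a toy illustration only, not the paper's actual framework or its synchronized parallel decoding; the threshold value and the per-layer logits below are invented for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_decode(layer_logits, threshold=0.9):
    """Return (predicted token id, exit layer index).

    layer_logits: per-layer logit vectors for the next token. We exit at
    the first layer whose softmax confidence exceeds `threshold`,
    otherwise we fall through to the final layer.
    """
    for depth, logits in enumerate(layer_logits):
        probs = softmax(logits)
        if probs.max() >= threshold:
            return int(probs.argmax()), depth
    return int(softmax(layer_logits[-1]).argmax()), len(layer_logits) - 1

# toy example: 3 "layers" predicting over a 4-token vocabulary
logits_per_layer = [np.array([0.1, 0.2, 0.1, 0.1]),  # low confidence
                    np.array([0.1, 4.0, 0.1, 0.1]),  # confident -> exit
                    np.array([0.1, 5.0, 0.1, 0.1])]
token, depth = early_exit_decode(logits_per_layer, threshold=0.9)
```

Easy tokens exit at shallow layers and save computation; hard tokens pay for the full depth.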

Re-thinking Federated Active Learning based on Inter-class Diversity

1 code implementation • CVPR 2023 • Sangmook Kim, Sangmin Bae, Hwanjun Song, Se-Young Yun

In this work, we first demonstrate that which of the two selector models is superior depends on the global and local inter-class diversity.

Active Learning · Federated Learning

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning

1 code implementation • CVPR 2023 • Sungnyun Kim, Sangmin Bae, Se-Young Yun

Fortunately, recent self-supervised learning (SSL) is a promising approach to pretraining a model without annotations, serving as an effective initialization for downstream tasks.

Representation Learning · Self-Supervised Learning

Self-Contrastive Learning

no code implementations • 29 Sep 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun

This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network.

Contrastive Learning

Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network

1 code implementation • 29 Jun 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun

To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network.

Contrastive Learning
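The self-contrast described in this snippet can be rendered as a minimal toy: treat each sample's sub-network feature and full-network feature as a positive pair in an InfoNCE-style loss. This is a simplified single-viewed sketch in the spirit of the snippet, not the paper's actual SelfCon objective; the temperature and feature shapes are made up.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def selfcon_loss(feat_sub, feat_full, temperature=0.1):
    """InfoNCE-style loss where row i of feat_sub (an intermediate-level
    feature) and row i of feat_full (the final-level feature) form the
    positive pair; all other rows act as negatives."""
    z1 = l2_normalize(feat_sub)
    z2 = l2_normalize(feat_full)
    sim = z1 @ z2.T / temperature                 # (N, N) similarities
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on diagonal

rng = np.random.default_rng(0)
feat_full = rng.normal(size=(8, 16))
loss_aligned = selfcon_loss(feat_full, feat_full)          # perfect match
loss_random = selfcon_loss(rng.normal(size=(8, 16)), feat_full)
```

When the two levels produce aligned features the loss is near zero; unrelated features give a loss near log N.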

Preservation of the Global Knowledge by Not-True Distillation in Federated Learning

2 code implementations • 6 Jun 2021 • Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun

In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models.

Continual Learning · Federated Learning · +1
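The aggregation mentioned in this snippet is typically the standard FedAvg weighted average of client parameters, sketched below. This shows only the generic aggregation step; the paper's Not-True Distillation is a separate local regularizer and is not reproduced here.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameter vectors into a global model by a
    data-size-weighted average (standard FedAvg aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# toy: two clients, each holding one parameter vector
w_a = np.array([1.0, 1.0])   # client with 1 sample
w_b = np.array([3.0, 5.0])   # client with 3 samples
global_w = fedavg([w_a, w_b], client_sizes=[1, 3])  # weights 0.25 / 0.75
```

Clients with more data pull the global model harder, which is exactly the dynamic that forgetting-mitigation methods like this paper's build on top of.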

MixCo: Mix-up Contrastive Learning for Visual Representation

1 code implementation • 13 Oct 2020 • Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun

Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation.

Contrastive Learning · Self-Supervised Learning
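The mix-up-meets-contrastive idea behind this entry can be sketched as a soft-target contrastive loss: a representation of a mix of two samples should match the first sample's key with weight lam and the second's with weight 1 - lam. This is a toy rendering on embeddings, not the paper's method (there the mixing happens on input images, with keys from a momentum encoder); all shapes and the temperature are invented.

```python
import numpy as np

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def mixco_loss(z_mix, z_i, z_j, lam, temperature=0.1):
    """Cross-entropy between the mixed embedding's similarity
    distribution over the two keys and the soft target (lam, 1 - lam)."""
    keys = np.stack([z_i, z_j])
    logits = l2n(z_mix) @ l2n(keys).T / temperature
    m = logits.max()
    log_prob = logits - (m + np.log(np.exp(logits - m).sum()))
    return -(lam * log_prob[0] + (1 - lam) * log_prob[1])

rng = np.random.default_rng(1)
z_i, z_j = rng.normal(size=16), rng.normal(size=16)
lam = 0.7
loss = mixco_loss(lam * z_i + (1 - lam) * z_j, z_i, z_j, lam)
```

The soft target gives the encoder a graded notion of semantic similarity instead of the hard positive/negative split of plain contrastive learning.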

SIPA: A Simple Framework for Efficient Networks

1 code implementation • 24 Apr 2020 • Gihun Lee, Sangmin Bae, Jaehoon Oh, Se-Young Yun

With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to make models lightweight enough for low-power devices.

Math
