1 code implementation • 15 Dec 2023 • June-Woo Kim, Sangmin Bae, Won-Yang Cho, Byungjo Lee, Ho-Young Jung
Despite the remarkable advances in deep learning technology, achieving satisfactory performance in lung sound classification remains a challenge due to the scarcity of available data.
Ranked #3 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)
no code implementations • 13 Nov 2023 • Felix den Breejen, Sangmin Bae, Stephen Cha, Tae-Young Kim, Seoung Hyun Koh, Se-Young Yun
While interest in tabular deep learning has grown significantly, conventional tree-based models still outperform deep learning methods.
1 code implementation • 11 Nov 2023 • June-Woo Kim, Chihyeon Yoon, Miika Toikkanen, Sangmin Bae, Ho-Young Jung
In this work, we propose a straightforward approach to augment imbalanced respiratory sound data using an audio diffusion model as a conditional neural vocoder.
Ranked #2 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)
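A minimal sketch of the oversampling loop such an approach implies, assuming a pretrained class-conditional diffusion vocoder exposing a hypothetical generate(label) method; the interface and the balancing rule are illustrative assumptions, not the authors' code.

```python
import random
from collections import Counter

def augment_minority(dataset, vocoder, target_per_class=None):
    """Rebalance a list of (waveform, label) pairs by synthesizing
    minority-class audio.

    `vocoder.generate(label)` is a hypothetical interface for a
    pretrained class-conditional diffusion vocoder; swap in the
    real model's generation call.
    """
    counts = Counter(label for _, label in dataset)
    target = target_per_class or max(counts.values())
    synthetic = []
    for label, n in counts.items():
        for _ in range(target - n):  # top up rare classes only
            synthetic.append((vocoder.generate(label), label))
    random.shuffle(synthetic)
    return dataset + synthetic
```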
1 code implementation • 9 Oct 2023 • Sangmin Bae, Jongwoo Ko, Hwanjun Song, Se-Young Yun
To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates adaptive computation paths for each token based on the complexity of generating the subsequent token.
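A minimal sketch of token-level early exiting under simplifying assumptions: a shared exit head after every layer and a max-softmax confidence threshold as the exit rule. The architecture and the exact exit criterion here are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class EarlyExitLM(nn.Module):
    """Toy model with a shared classifier head usable after every layer.
    No causal masking; single-step demo only."""
    def __init__(self, vocab=100, dim=64, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(dim, vocab)  # shared exit head

    @torch.no_grad()
    def generate_token(self, ids, threshold=0.9):
        h = self.embed(ids)
        for depth, layer in enumerate(self.layers, start=1):
            h = layer(h)
            probs = self.head(h[:, -1]).softmax(-1)
            conf, tok = probs.max(-1)
            if conf.item() >= threshold:   # confident: exit early
                return tok.item(), depth
        return tok.item(), depth           # fell through: full depth

model = EarlyExitLM()
token, depth = model.generate_token(torch.tensor([[1, 2, 3]]))
print(f"emitted token {token} after {depth} layer(s)")
```

Easy tokens exit at shallow depths and save computation, while hard tokens use the full stack; the threshold trades latency against quality.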
1 code implementation • 23 May 2023 • Sangmin Bae, June-Woo Kim, Won-Yang Cho, Hyerim Baek, Soyoun Son, Byungjo Lee, Changwan Ha, Kyongpil Tae, Sungnyun Kim, Se-Young Yun
Respiratory sounds contain crucial information for the early diagnosis of fatal lung diseases.
Ranked #1 on Audio Classification on ICBHI Respiratory Sound Database (using extra training data)
1 code implementation • CVPR 2023 • Sangmook Kim, Sangmin Bae, Hwanjun Song, Se-Young Yun
In this work, we first demonstrate that the relative superiority of the two selector models depends on global and local inter-class diversity.
1 code implementation • CVPR 2023 • Sungnyun Kim, Sangmin Bae, Se-Young Yun
Fortunately, recent self-supervised learning (SSL) offers a promising way to pretrain a model without annotations, serving as an effective initialization for downstream tasks.
no code implementations • 29 Sep 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network.
1 code implementation • 29 Jun 2021 • Sangmin Bae, Sungnyun Kim, Jongwoo Ko, Gihun Lee, Seungjong Noh, Se-Young Yun
To this end, we propose Self-Contrastive (SelfCon) learning, which self-contrasts within multiple outputs from the different levels of a single network.
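A minimal sketch of that idea under simplifying assumptions: embeddings from an intermediate exit and the final exit of one network act as the two "views" in an NT-Xent-style loss, so no second augmented forward pass is needed. The projection-head design and the exact objective in the paper may differ.

```python
import torch
import torch.nn.functional as F

def self_contrastive_loss(z_mid, z_final, tau=0.1):
    """Contrast outputs of two levels of the same network: the
    intermediate-exit and final-exit embeddings of sample i form
    the positive pair; other samples in the batch are negatives."""
    z1 = F.normalize(z_mid, dim=1)
    z2 = F.normalize(z_final, dim=1)
    logits = z1 @ z2.t() / tau          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, labels)

# usage: z_mid from a projection head on a middle block,
# z_final from the head on the last block, same mini-batch
loss = self_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```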
2 code implementations • 6 Jun 2021 • Gihun Lee, Minchan Jeong, Yongjin Shin, Sangmin Bae, Se-Young Yun
In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models.
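For reference, the aggregation step described in that sentence is, in its vanilla form, a weighted parameter average. A minimal FedAvg-style sketch of that baseline follows; the paper's specific method builds on top of such aggregation rather than being shown here.

```python
import torch

def fedavg(state_dicts, num_examples):
    """Weighted average of client model parameters (FedAvg-style).
    `state_dicts`: list of model.state_dict() from clients;
    `num_examples`: list of client dataset sizes used as weights."""
    total = sum(num_examples)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(state_dicts, num_examples)
        )
    return avg
```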
no code implementations • 6 Dec 2020 • Taehyeon Kim, Sangmin Bae, Jin-woo Lee, Seyoung Yun
Federated learning has emerged as an innovative paradigm of collaborative machine learning.
1 code implementation • 13 Oct 2020 • Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun
Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation.
1 code implementation • 24 Apr 2020 • Gihun Lee, Sangmin Bae, Jaehoon Oh, Se-Young Yun
With the success of deep learning in various fields and the advent of numerous Internet of Things (IoT) devices, it is essential to develop lightweight models suitable for low-power devices.