no code implementations • 23 Nov 2023 • Dongjun Jang, Sangah Lee, Sungjoo Byun, Jinwoong Kim, Jean Seo, Minseok Kim, Soyeon Kim, Chaeyoung Oh, Jaeyoon Kim, Hyemi Jo, Hyopil Shin
This paper presents the DaG LLM (David and Goliath Large Language Model), a language model specialized for Korean and fine-tuned through Instruction Tuning across 41 tasks within 13 distinct categories.
1 code implementation • 14 Aug 2023 • Giorgio Fabbro, Stefan Uhlich, Chieh-Hsin Lai, Woosung Choi, Marco Martínez-Ramírez, WeiHsiang Liao, Igor Gadelha, Geraldo Ramos, Eddie Hsu, Hugo Rodrigues, Fabian-Robert Stöter, Alexandre Défossez, Yi Luo, Jianwei Yu, Dipam Chakraborty, Sharada Mohanty, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Nabarun Goswami, Tatsuya Harada, Minseok Kim, Jun Hyung Lee, Yuanliang Dong, Xinran Zhang, Jiafeng Liu, Yuki Mitsufuji
We propose a formalization of the errors that can occur in the design of a training dataset for music source separation (MSS) systems and introduce two new datasets that simulate such errors: SDXDB23_LabelNoise and SDXDB23_Bleeding1.
1 code implementation • 15 Jun 2023 • Minseok Kim, Jun Hyung Lee, Soonyoung Jung
In this report, we present our award-winning solutions for the Music Demixing Track of the Sound Demixing Challenge 2023.
Ranked #4 on Music Source Separation on MUSDB18
no code implementations • 18 Aug 2022 • Minseok Kim, Jinoh Oh, Jaeyoung Do, Sungjin Lee
Graph neural networks (GNNs) have achieved remarkable success in recommender systems by representing users and items based on their historical interactions.
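As a rough illustration of representing users and items from their historical interactions (not the authors' model), here is a minimal LightGCN-style propagation step in NumPy; all sizes and names are made up:

```python
# Toy sketch of one round of graph-based embedding propagation on a
# user-item interaction matrix (illustrative only, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 8

R = rng.integers(0, 2, size=(n_users, n_items)).astype(float)  # interactions
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))

# Symmetric degree normalization, common in graph-based recommenders.
d_u = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
d_i = np.maximum(R.sum(axis=0, keepdims=True), 1.0)
norm_R = R / np.sqrt(d_u) / np.sqrt(d_i)

# Each user aggregates its items' embeddings, and vice versa.
user_next = norm_R @ item_emb
item_next = norm_R.T @ user_emb

# Recommendation scores are inner products of the propagated embeddings.
scores = user_next @ item_next.T
print(scores.shape)  # (4, 5)
```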
1 code implementation • 19 Mar 2022 • Minseok Kim, Hwanjun Song, Yooju Shin, Dongmin Park, Kijung Shin, Jae-Gil Lee
It features an adaptive learning rate for each parameter-interaction pair, inducing a recommender to quickly learn users' up-to-date interests.
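To illustrate the per-(parameter, interaction) learning-rate idea, here is a hypothetical NumPy sketch; the heuristic rate below merely stands in for the learned meta-network and is not the paper's method:

```python
# Hypothetical sketch of a per-(parameter, interaction) learning rate.
# A real method would learn the rate mapping; here it is a toy heuristic.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = rng.normal(size=dim)                     # one embedding vector

def grad_fn(w, x, y):
    # Toy squared-error gradient for a single (features, label) interaction.
    return 2.0 * (w @ x - y) * x

interactions = [(rng.normal(size=dim), 1.0), (rng.normal(size=dim), 0.0)]

for x, y in interactions:
    g = grad_fn(w, x, y)
    # One rate per parameter, per interaction (toy stand-in for a meta-network).
    lr = 0.1 / (1.0 + np.abs(g))
    w -= lr * g
```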
1 code implementation • NeurIPS 2021 • Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee
Deep neural networks (DNNs) have achieved great success in many machine learning tasks by virtue of their high expressive power.
1 code implementation • 24 Nov 2021 • Minseok Kim, Woosung Choi, Jaehwa Chung, Daewon Lee, Soonyoung Jung
This paper proposes a two-stream neural network for music demixing, called KUIELab-MDX-Net, which shows a good balance of performance and required resources.
Ranked #7 on Music Source Separation on MUSDB18
no code implementations • 27 Sep 2021 • Minseok Kim, Hoon Lee, Hongju Lee, Inkyu Lee
This paper studies a deep learning approach to binary assignment problems in wireless networks, which identifies the binary variables of permutation matrices.
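As a hedged illustration of the binarization step such methods need, here is a Hungarian projection from continuous network scores to a binary permutation matrix; this is a common post-processing choice, not necessarily the paper's:

```python
# Project continuous scores onto a binary permutation matrix via the
# Hungarian algorithm (a common choice, not necessarily the paper's method).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))             # network outputs, one per (i, j) pair

rows, cols = linear_sum_assignment(-scores)  # negate to maximize total score
P = np.zeros_like(scores)
P[rows, cols] = 1.0                          # exactly one 1 per row and column
```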
1 code implementation • 31 Aug 2021 • Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk
The main differences from past challenges are that 1) the competition is designed to more easily allow machine learning practitioners from other disciplines to participate, 2) evaluation is done on a hidden test set created by music professionals exclusively for the challenge to ensure its transparency, i.e., the test set is not accessible to anyone except the challenge organizers, and 3) the dataset provides a wider range of music genres and involves a greater number of mixing engineers.
1 code implementation • 28 Apr 2021 • Woosung Choi, Minseok Kim, Marco A. Martínez Ramírez, Jaehwa Chung, Soonyoung Jung
This paper proposes a neural network that applies audio transformations to user-specified sources (e.g., vocals) of a given audio track according to a given description, while preserving the sources not mentioned in the description.
no code implementations • 8 Dec 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
In the seeding phase, the network is updated using all the samples in order to collect a seed set of clean samples.
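A minimal sketch of how a clean "seed" might be collected after such a phase, assuming the common small-loss criterion (the paper's exact selection rule may differ):

```python
# Small-loss seed selection: after a warm-up pass over all samples,
# treat the lowest-loss samples as the clean seed (assumed criterion).
import numpy as np

rng = np.random.default_rng(0)
losses = rng.exponential(scale=1.0, size=100)  # per-sample losses after warm-up

noise_rate = 0.2                               # assumed known or estimated
k = int((1.0 - noise_rate) * len(losses))
seed_idx = np.argsort(losses)[:k]              # low-loss samples form the seed
```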
1 code implementation • 22 Oct 2020 • Woosung Choi, Minseok Kim, Jaehwa Chung, Soonyoung Jung
Recent deep-learning approaches have shown that Frequency Transformation (FT) blocks can significantly improve spectrogram-based single-source separation models by capturing frequency patterns (a minimal FT-block sketch follows this entry).
Ranked #19 on Music Source Separation on MUSDB18
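A minimal sketch of an FT-style block, assuming it is a fully connected layer applied along the frequency axis of a spectrogram so that each time frame can mix information across all frequency bins (dimensions illustrative, not the paper's code):

```python
# Sketch of a Frequency Transformation (FT) block: a dense bottleneck
# applied along the frequency axis, independently at each time frame.
import torch
import torch.nn as nn

n_freq, bottleneck = 1024, 128

ft_block = nn.Sequential(
    nn.Linear(n_freq, bottleneck),  # compress the frequency axis
    nn.ReLU(),
    nn.Linear(bottleneck, n_freq),  # restore it, now with global freq context
)

spec = torch.randn(8, 64, n_freq)   # (batch, time frames, frequency bins)
out = ft_block(spec)                # Linear acts on the last (frequency) dim
```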
1 code implementation • 16 Jul 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
Deep learning has achieved remarkable success in numerous domains, aided by large amounts of data.
1 code implementation • 2 Dec 2019 • Woosung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, Soonyoung Jung
Singing Voice Separation (SVS) aims to separate the singing voice from a given mixed musical signal.
no code implementations • 19 Nov 2019 • Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee
In this paper, we claim that such overfitting can be avoided by "early stopping": halting the training of a deep neural network before the noisy labels are severely memorized.
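A runnable toy of the early-stopping idea: on noisily labeled data, clean-validation accuracy typically rises and then falls as memorization begins, so training stops at the peak. The accuracy curve below is simulated, not from the paper:

```python
# Toy early stopping before memorization: keep the checkpoint from the
# epoch where clean-validation accuracy peaks (simulated curve).
val_acc = [0.55, 0.68, 0.74, 0.76, 0.73, 0.69, 0.64]

best_acc, best_epoch, patience, bad = 0.0, 0, 2, 0
for epoch, acc in enumerate(val_acc):
    if acc > best_acc:
        best_acc, best_epoch, bad = acc, epoch, 0  # would save a checkpoint here
    else:
        bad += 1
        if bad >= patience:
            break                                  # stop before memorization sets in
print(f"stopped after epoch {epoch}; best epoch {best_epoch} (acc {best_acc:.2f})")
```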
no code implementations • 19 Nov 2019 • Hwanjun Song, Minseok Kim, Sundong Kim, Jae-Gil Lee
Compared with existing batch selection methods, Recency Bias reduced the test error by up to 20.97% in a fixed wall-clock training time.
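A hedged sketch of recency-biased batch selection: favor samples whose recent predictions have been uncertain. The disagreement measure below is a simplification; the paper's exact uncertainty measure may differ:

```python
# Simplified recency-biased batch selection: sample training examples with
# probability tied to the disagreement among their recent predictions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, window = 1000, 5

# Simulated sliding window of recent predicted labels per sample.
recent_preds = rng.integers(0, 10, size=(n_samples, window))

def disagreement(row):
    # Fraction of recent predictions that differ from the majority label.
    _, counts = np.unique(row, return_counts=True)
    return 1.0 - counts.max() / len(row)

uncertainty = np.apply_along_axis(disagreement, 1, recent_preds)

# Sampling probabilities biased toward uncertain samples.
probs = (uncertainty + 1e-3) / (uncertainty + 1e-3).sum()
batch = rng.choice(n_samples, size=128, replace=False, p=probs)
```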
no code implementations • 25 Sep 2019 • Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee
In this paper, we claim that such overfitting can be avoided by "early stopping": halting the training of a deep neural network before the noisy labels are severely memorized.
1 code implementation • 15 Jun 2019 • Hwanjun Song, Minseok Kim, Jae-Gil Lee
Owing to their extremely high expressive power, deep neural networks have the side effect of completely memorizing training data, even when the labels are extremely noisy.
Ranked #10 on Learning with noisy labels on ANIMAL
no code implementations • ICLR 2019 • Hwanjun Song, Sundong Kim, Minseok Kim, Jae-Gil Lee
Neural networks can converge faster with the help of a smarter batch selection strategy.
no code implementations • 12 Nov 2016 • Jangho Lee, Gyuwan Kim, Jaeyoon Yoo, Changwoo Jung, Minseok Kim, Sungroh Yoon
Under the assumption that such an automatically generated dataset could relieve the burden of manual question-answer generation, we used it to train an instance of Watson and evaluated the training efficiency and accuracy.