no code implementations • 13 Jun 2024 • Thomas Zemen, Jorge Gomez-Ponce, Aniruddha Chandra, Michael Walter, Enes Aksoy, Ruisi He, David Matolak, Minseok Kim, Jun-ichi Takada, Sana Salous, Reinaldo Valenzuela, Andreas F. Molisch
In this article, we focus on communication technologies for 5G and beyond that are increasingly able to exploit the specific environment geometry for both communication and sensing.
no code implementations • 4 Jun 2024 • Dehong Xu, Liang Qiu, Minseok Kim, Faisal Ladhak, Jaeyoung Do
Pre-trained large language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful or toxic, or may fail to align with user expectations.
no code implementations • 26 May 2024 • Yeachan Park, Minseok Kim, Yeoneung Kim
Focusing on the grokking phenomenon that arises in learning arithmetic binary operations via the transformer model, we begin with a discussion on data augmentation in the case of commutative binary operations.
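As a minimal sketch of this kind of augmentation, assuming an addition-modulo-p task (a common grokking benchmark; the modulus and operand pairs below are hypothetical): for a commutative operation, every observed example yields a second, free example with the operands swapped.

```python
# Hypothetical setup, not the paper's code: for a commutative operation such
# as addition modulo p, every observed example a∘b = c also yields the free
# example b∘a = c, enlarging the training set without any new labels.
p = 97                                        # illustrative modulus
train_pairs = [(3, 5), (10, 42), (7, 7)]      # observed operand pairs
labels = {(a, b): (a + b) % p for a, b in train_pairs}

augmented = dict(labels)
for (a, b), c in labels.items():
    augmented[(b, a)] = c                     # commutativity gives this for free

print(len(labels), len(augmented))            # symmetric pairs like (7, 7) add nothing
```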
no code implementations • 5 May 2024 • June-Woo Kim, Miika Toikkanen, Sangmin Bae, Minseok Kim, Ho-Young Jung
To address this, we propose RepAugment, an input-agnostic, representation-level augmentation technique that not only outperforms SpecAugment but is also suitable for respiratory sound classification with waveform-pretrained models.
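To illustrate what "representation-level" means here, the sketch below masks random dimensions of a pretrained encoder's output embedding instead of the input spectrogram; the masking scheme and embedding size are assumptions, not the paper's exact recipe.

```python
import numpy as np

def representation_mask(embedding: np.ndarray, mask_prob: float = 0.1,
                        rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Zero out random dimensions of an encoder output embedding.

    Because the augmentation acts on the representation rather than the
    input, it is agnostic to whether the encoder was pretrained on
    spectrograms or raw waveforms.
    """
    mask = rng.random(embedding.shape) >= mask_prob
    return embedding * mask

z = np.random.randn(4, 768)        # hypothetical batch of encoder embeddings
z_aug = representation_mask(z)
```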
no code implementations • 24 Mar 2024 • Sungjoo Byun, Jiseung Hong, Sumin Park, Dongjun Jang, Jean Seo, Minseok Kim, Chaeyoung Oh, Hyopil Shin
Named Entity Recognition (NER) plays a pivotal role in medical Natural Language Processing (NLP).
no code implementations • 15 Mar 2024 • Minseok Kim, Namjo Ahn, Song Min Kim
NR-Surface incorporates (i) a new extremely low-power (14 kHz sampling) reconfiguration interface, the NarrowBand Packet Unit (NBPU), for synchronization and real-time reconfiguration, and (ii) a highly responsive, low-leakage metasurface designed for low-duty-cycle operation, carefully leveraging the structure and periodicity of the beam management procedure in the NR standard.
no code implementations • 23 Feb 2024 • Dongjun Jang, Jean Seo, Sungjoo Byun, Taekyoung Kim, Minseok Kim, Hyopil Shin
In order to tackle these challenges, we introduce CARBD-Ko (a Contextually Annotated Review Benchmark Dataset for Aspect-Based Sentiment Classification in Korean), a benchmark dataset that incorporates aspects and dual-tagged polarities to distinguish between aspect-specific and aspect-agnostic sentiment classification.
1 code implementation • 12 Dec 2023 • Hwanjun Song, Minseok Kim, Jae-Gil Lee
Multi-label classification poses challenges due to imbalanced and noisy labels in training data.
no code implementations • 23 Nov 2023 • Dongjun Jang, Sangah Lee, Sungjoo Byun, Jinwoong Kim, Jean Seo, Minseok Kim, Soyeon Kim, Chaeyoung Oh, Jaeyoon Kim, Hyemi Jo, Hyopil Shin
This paper presents the DaG LLM (David and Goliath Large Language Model), a language model specialized for Korean and fine-tuned through Instruction Tuning across 41 tasks within 13 distinct categories.
2 code implementations • 14 Aug 2023 • Giorgio Fabbro, Stefan Uhlich, Chieh-Hsin Lai, Woosung Choi, Marco Martínez-Ramírez, WeiHsiang Liao, Igor Gadelha, Geraldo Ramos, Eddie Hsu, Hugo Rodrigues, Fabian-Robert Stöter, Alexandre Défossez, Yi Luo, Jianwei Yu, Dipam Chakraborty, Sharada Mohanty, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Nabarun Goswami, Tatsuya Harada, Minseok Kim, Jun Hyung Lee, Yuanliang Dong, Xinran Zhang, Jiafeng Liu, Yuki Mitsufuji
We propose a formalization of the errors that can occur in the design of a training dataset for music source separation (MSS) systems and introduce two new datasets that simulate such errors: SDXDB23_LabelNoise and SDXDB23_Bleeding.
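A rough sketch of one such error, bleeding, assuming stems stored as NumPy arrays; the leak coefficient and mixing rule are illustrative, not the datasets' actual corruption process.

```python
import numpy as np

def simulate_bleeding(stems: dict, leak: float = 0.05) -> dict:
    """Leak a fraction of every other stem into each target stem,
    mimicking microphone bleed in a multitrack recording."""
    out = {}
    for name, audio in stems.items():
        bleed = sum(a for n, a in stems.items() if n != name)
        out[name] = audio + leak * bleed
    return out

stems = {n: np.random.randn(44100) * 0.1
         for n in ("vocals", "drums", "bass", "other")}
corrupted = simulate_bleeding(stems)
```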
1 code implementation • 15 Jun 2023 • Minseok Kim, Jun Hyung Lee, Soonyoung Jung
In this report, we present our award-winning solutions for the Music Demixing Track of Sound Demixing Challenge 2023.
Ranked #4 on Music Source Separation on MUSDB18
no code implementations • 18 Aug 2022 • Minseok Kim, Jinoh Oh, Jaeyoung Do, Sungjin Lee
Graph neural networks (GNNs) have achieved remarkable success in recommender systems by representing users and items based on their historical interactions.
1 code implementation • 19 Mar 2022 • Minseok Kim, Hwanjun Song, Yooju Shin, Dongmin Park, Kijung Shin, Jae-Gil Lee
It features an adaptive learning rate for each parameter-interaction pair, inducing the recommender to quickly learn users' up-to-date interests.
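The paper derives these rates with a meta-model; as a simpler stand-in that shows what a per-parameter-interaction learning rate looks like, the sketch below keeps an Adagrad-style gradient history per embedding row (all sizes hypothetical).

```python
import numpy as np

emb = np.random.randn(100, 16) * 0.1     # user/item embedding table
history = np.zeros_like(emb)             # per-parameter gradient history
base_lr, eps = 0.1, 1e-8

def adaptive_update(row: int, grad: np.ndarray) -> None:
    """Apply a step whose size adapts per parameter-interaction pair."""
    history[row] += grad ** 2
    step = base_lr / (np.sqrt(history[row]) + eps)    # elementwise rate
    emb[row] -= step * grad

adaptive_update(7, np.random.randn(16))   # hypothetical gradient for row 7
```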
1 code implementation • NeurIPS 2021 • Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee
Deep neural networks (DNNs) have achieved great success in many machine learning tasks by virtue of their high expressive power.
1 code implementation • 24 Nov 2021 • Minseok Kim, Woosung Choi, Jaehwa Chung, Daewon Lee, Soonyoung Jung
This paper proposes a two-stream neural network for music demixing, called KUIELab-MDX-Net, which shows a good balance of performance and required resources.
Ranked #7 on Music Source Separation on MUSDB18
no code implementations • 27 Sep 2021 • Minseok Kim, Hoon Lee, Hongju Lee, Inkyu Lee
This paper studies a deep learning approach for binary assignment problems in wireless networks, which identifies binary variables that form permutation matrices.
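Whether the paper uses this exact step is not stated in the excerpt, but a standard way to turn a network's continuous (relaxed) scores into a binary permutation matrix is a Hungarian-algorithm projection, sketched here with SciPy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

scores = np.random.rand(4, 4)                  # hypothetical relaxed network output
rows, cols = linear_sum_assignment(-scores)    # Hungarian step: maximize total score
P = np.zeros_like(scores)
P[rows, cols] = 1.0                            # binary permutation matrix
assert (P.sum(0) == 1).all() and (P.sum(1) == 1).all()
```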
1 code implementation • 31 Aug 2021 • Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk
The main differences compared with past challenges are that 1) the competition is designed to more easily allow machine learning practitioners from other disciplines to participate, 2) evaluation is done on a hidden test set created by music professionals exclusively for the challenge to ensure its transparency, i.e., the test set is not accessible to anyone except the challenge organizers, and 3) the dataset covers a wider range of music genres and involves a greater number of mixing engineers.
1 code implementation • 28 Apr 2021 • Woosung Choi, Minseok Kim, Marco A. Martínez Ramírez, Jaehwa Chung, Soonyoung Jung
This paper proposes a neural network that applies audio transformations to user-specified sources (e.g., vocals) of a given audio track according to a given description, while preserving other sources not mentioned in the description.
no code implementations • 8 Dec 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
In the seeding phase, the network is updated using all the samples to collect a seed of clean samples.
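A common proxy for "clean" under label noise is small training loss; the sketch below collects such a seed from per-sample losses, as an illustrative stand-in for the paper's seeding criterion (the ratio and losses are hypothetical).

```python
import numpy as np

def collect_seed(losses: np.ndarray, seed_ratio: float = 0.5) -> np.ndarray:
    """Return the indices of the lowest-loss samples as the clean seed."""
    k = int(len(losses) * seed_ratio)
    return np.argsort(losses)[:k]

per_sample_loss = np.random.rand(1000)   # losses recorded during the seeding phase
seed_indices = collect_seed(per_sample_loss)
```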
1 code implementation • 22 Oct 2020 • Woosung Choi, Minseok Kim, Jaehwa Chung, Soonyoung Jung
Recent deep-learning approaches have shown that Frequency Transformation (FT) blocks can significantly improve spectrogram-based single-source separation models by capturing frequency patterns.
Ranked #20 on Music Source Separation on MUSDB18
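In spirit, an FT block lets every time frame mix information across all frequency bins. A minimal sketch in PyTorch, with hypothetical sizes, standing in for rather than reproducing the paper's exact architecture:

```python
import torch
import torch.nn as nn

class FrequencyTransform(nn.Module):
    """A minimal FT-style block: a bottlenecked fully connected layer applied
    along the frequency axis, so each time frame can capture patterns that
    span all frequency bins."""
    def __init__(self, n_bins: int, bottleneck: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, bottleneck), nn.ReLU(),
            nn.Linear(bottleneck, n_bins),
        )

    def forward(self, spec):                 # spec: (batch, channels, time, freq)
        return self.net(spec)                # Linear acts on the last (freq) dim

x = torch.randn(2, 1, 128, 1024)             # hypothetical magnitude spectrogram
y = FrequencyTransform(n_bins=1024, bottleneck=64)(x)
```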
1 code implementation • 16 Jul 2020 • Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
Deep learning has achieved remarkable success in numerous domains with the help of large amounts of data.
1 code implementation • 2 Dec 2019 • Woosung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, Soonyoung Jung
Singing Voice Separation (SVS) aims to separate the singing voice from a given mixed musical signal.
no code implementations • 19 Nov 2019 • Hwanjun Song, Minseok Kim, Sundong Kim, Jae-Gil Lee
Compared with existing batch selection methods, the results showed that Recency Bias reduced the test error by up to 20.97% in a fixed wall-clock training time.
no code implementations • 19 Nov 2019 • Hwanjun Song, Minseok Kim, Dongmin Park, Jae-Gil Lee
In this paper, we claim that such overfitting can be avoided by "early stopping" training a deep neural network before the noisy labels are severely memorized.
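A generic early-stopping loop of the kind this claim suggests, with stub functions and a hypothetical patience value standing in for a real training pipeline and the paper's actual stopping criterion:

```python
def train_one_epoch() -> None:        # stub standing in for the real training step
    pass

def clean_validation_accuracy(epoch: int) -> float:
    """Stub: accuracy rises, then decays once noisy labels get memorized."""
    return min(0.85, 0.5 + 0.05 * epoch) - 0.01 * max(0, epoch - 10)

best_acc, patience, bad_epochs = 0.0, 3, 0
for epoch in range(100):
    train_one_epoch()
    acc = clean_validation_accuracy(epoch)
    if acc > best_acc:
        best_acc, bad_epochs = acc, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:    # no improvement for `patience` epochs:
            break                     # stop before memorization sets in
```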
1 code implementation • 15 Jun 2019 • Hwanjun Song, Minseok Kim, Jae-Gil Lee
Owing to their extremely high expressive power, deep neural networks can completely memorize training data even when the labels are extremely noisy.
Ranked #14 on Learning with noisy labels on ANIMAL
no code implementations • ICLR 2019 • Hwanjun Song, Sundong Kim, Minseok Kim, Jae-Gil Lee
Neural networks can converge faster with help from a smarter batch selection strategy.
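One concrete instance of such a strategy, offered as an illustration rather than the paper's exact method: draw batches preferentially from samples whose recent predictions disagree, i.e., those the model is currently uncertain about.

```python
import numpy as np

rng = np.random.default_rng(0)
recent_preds = rng.integers(0, 10, size=(1000, 5))   # last 5 predicted labels per sample

def disagreement(row: np.ndarray) -> float:
    """Fraction of recent predictions that disagree with the majority vote."""
    _, counts = np.unique(row, return_counts=True)
    return 1.0 - counts.max() / len(row)

uncertainty = np.apply_along_axis(disagreement, 1, recent_preds)
probs = (uncertainty + 1e-3) / (uncertainty + 1e-3).sum()   # sampling distribution
batch = rng.choice(len(probs), size=32, replace=False, p=probs)
```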
no code implementations • 12 Nov 2016 • Jangho Lee, Gyuwan Kim, Jaeyoon Yoo, Changwoo Jung, Minseok Kim, Sungroh Yoon
Under the assumption that such an automatically generated dataset could relieve the burden of manual question-answer generation, we used it to train an instance of Watson and evaluated the training efficiency and accuracy.