Search Results for author: Soonyoung Jung

Found 7 papers, 5 papers with code

Sound Demixing Challenge 2023 Music Demixing Track Technical Report: TFC-TDF-UNet v3

1 code implementation • 15 Jun 2023 • Minseok Kim, Jun Hyung Lee, Soonyoung Jung

In this report, we present our award-winning solutions for the Music Demixing Track of Sound Demixing Challenge 2023.

Music Source Separation

Learning source-aware representations of music in a discrete latent space

no code implementations • 26 Nov 2021 • Jinsung Kim, Yeong-Seok Jeong, Woosung Choi, Jaehwa Chung, Soonyoung Jung

To address this issue, we propose a novel method to learn source-aware latent representations of music through a Vector-Quantized Variational Auto-Encoder (VQ-VAE). We train our VQ-VAE to encode an input mixture into a tensor of integers in a discrete latent space, and design them to have a decomposed structure which allows humans to manipulate the latent vector in a source-aware manner.
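The decomposed latent described here can be pictured as one codebook per source, so that the integer codes belonging to a single source can be edited independently of the others. Below is a minimal PyTorch sketch of such a source-aware quantizer; the source names, dimensions, and codebook size are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SourceAwareQuantizer(nn.Module):
    """Toy vector quantizer: the encoder output is split into per-source
    groups, and each group is quantized against its own codebook, so the
    resulting integer codes can be edited one source at a time."""

    def __init__(self, sources=("vocals", "drums", "bass", "other"),
                 dim_per_source=64, codebook_size=512):
        super().__init__()
        self.sources = sources
        self.dim = dim_per_source
        self.codebooks = nn.ModuleDict(
            {s: nn.Embedding(codebook_size, dim_per_source) for s in sources}
        )

    def forward(self, z):
        # z: (batch, len(sources) * dim_per_source, time) encoder output
        chunks = torch.split(z, self.dim, dim=1)
        quantized, codes = [], {}
        for source, chunk in zip(self.sources, chunks):
            feats = chunk.permute(0, 2, 1)                        # (batch, time, dim)
            table = self.codebooks[source].weight                 # (codebook_size, dim)
            dists = (feats.unsqueeze(2) - table).pow(2).sum(-1)   # (batch, time, codebook_size)
            idx = dists.argmin(dim=-1)                            # integer code per frame
            codes[source] = idx
            quantized.append(self.codebooks[source](idx).permute(0, 2, 1))
        return torch.cat(quantized, dim=1), codes

quantizer = SourceAwareQuantizer()
z = torch.randn(1, 4 * 64, 100)       # toy encoder output for a mixture
z_q, codes = quantizer(z)             # codes["vocals"]: (1, 100) tensor of integer codes
```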

LightSAFT: Lightweight Latent Source Aware Frequency Transform for Source Separation

no code implementations • 24 Nov 2021 • Yeong-Seok Jeong, Jinsung Kim, Woosung Choi, Jaehwa Chung, Soonyoung Jung

Conditioned source separations have attracted significant attention because of their flexibility, applicability and extensionality.

KUIELab-MDX-Net: A Two-Stream Neural Network for Music Demixing

1 code implementation • 24 Nov 2021 • Minseok Kim, Woosung Choi, Jaehwa Chung, Daewon Lee, Soonyoung Jung

This paper proposes a two-stream neural network for music demixing, called KUIELab-MDX-Net, which shows a good balance of performance and required resources.
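The "two-stream" design pairs two separation branches whose per-stem estimates are combined; a simple way to picture that combination is a weighted blend of the two branches' waveforms. The sketch below uses hypothetical branch outputs and a hypothetical blend weight, not the released KUIELab-MDX-Net code.

```python
import torch

def blend_stems(spec_branch_out, wave_branch_out, weight=0.5):
    """Average the per-stem waveform estimates of two branches.
    Both inputs: dict mapping stem name -> (channels, samples) tensor.
    `weight` is the share given to the first branch (hypothetical value)."""
    return {
        stem: weight * spec_branch_out[stem] + (1.0 - weight) * wave_branch_out[stem]
        for stem in spec_branch_out
    }

# Toy usage with random stand-ins for the two branches' estimates.
stems = ["vocals", "drums", "bass", "other"]
branch_a = {s: torch.randn(2, 44100) for s in stems}
branch_b = {s: torch.randn(2, 44100) for s in stems}
blended = blend_stems(branch_a, branch_b, weight=0.5)
```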

Music Source Separation • Vocal Bursts Valence Prediction

AMSS-Net: Audio Manipulation on User-Specified Sources with Textual Queries

1 code implementation • 28 Apr 2021 • Woosung Choi, Minseok Kim, Marco A. Martínez Ramírez, Jaehwa Chung, Soonyoung Jung

This paper proposes a neural network that performs audio transformations to user-specified sources (e.g., vocals) of a given audio track according to a given description while preserving other sources not mentioned in the description.
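One way such text conditioning can steer an audio network is FiLM-style modulation, where an embedding of the query scales and shifts the audio features so different instructions produce different edits of the same input. The sketch below illustrates that general idea with hypothetical layer sizes and a bag-of-words query encoder; it is not the AMSS-Net architecture itself.

```python
import torch
import torch.nn as nn

class TextConditionedEditor(nn.Module):
    """Toy text-conditioned audio editor: a text embedding modulates the
    audio features (FiLM-style), so the same network can apply different
    edits ("separate vocals", "mute drums", ...) to the same mixture.
    Hypothetical layer sizes; not the AMSS-Net architecture itself."""

    def __init__(self, vocab_size=1000, text_dim=128, audio_channels=2, hidden=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, text_dim)   # bag-of-words query encoder
        self.to_film = nn.Linear(text_dim, 2 * hidden)       # -> (scale, shift)
        self.encode = nn.Conv1d(audio_channels, hidden, 7, padding=3)
        self.decode = nn.Conv1d(hidden, audio_channels, 7, padding=3)

    def forward(self, audio, query_tokens):
        # audio: (batch, channels, samples); query_tokens: (batch, n_tokens) int ids
        h = torch.relu(self.encode(audio))
        scale, shift = self.to_film(self.embed(query_tokens)).chunk(2, dim=-1)
        h = h * scale.unsqueeze(-1) + shift.unsqueeze(-1)    # the text query steers the features
        return self.decode(h)

model = TextConditionedEditor()
mix = torch.randn(1, 2, 44100)
query = torch.randint(0, 1000, (1, 5))   # token ids for e.g. "apply reverb to vocals"
edited = model(mix, query)               # same shape as the input mixture
```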

LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation

1 code implementation • 22 Oct 2020 • Woosung Choi, Minseok Kim, Jaehwa Chung, Soonyoung Jung

Recent deep-learning approaches have shown that Frequency Transformation (FT) blocks can significantly improve spectrogram-based single-source separation models by capturing frequency patterns.
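In its simplest reading, an FT block applies a (bottlenecked) fully connected layer along the frequency axis of each spectrogram frame, letting every output bin depend on all input bins, e.g., to link harmonics. Below is a minimal PyTorch sketch under that reading, with illustrative sizes rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class FreqTransformBlock(nn.Module):
    """Toy Frequency Transformation block: a bottlenecked fully connected
    layer applied along the frequency axis, shared over time frames and
    channels, so each output bin can depend on every input bin."""

    def __init__(self, n_freq=2048, bottleneck=16):
        super().__init__()
        self.ft = nn.Sequential(
            nn.Linear(n_freq, n_freq // bottleneck),
            nn.ReLU(),
            nn.Linear(n_freq // bottleneck, n_freq),
        )

    def forward(self, spec):
        # spec: (batch, channels, freq, time) magnitude spectrogram
        x = spec.transpose(-2, -1)                      # frequency last: (batch, channels, time, freq)
        return spec + self.ft(x).transpose(-2, -1)      # residual connection

block = FreqTransformBlock(n_freq=2048)
spec = torch.abs(torch.randn(1, 2, 2048, 128))          # toy |STFT| magnitudes
out = block(spec)                                       # same shape, frequency-mixed features
```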

Music Source Separation
