Search Results for author: Woosung Choi

Found 11 papers, 7 papers with code

Timbre-Trap: A Low-Resource Framework for Instrument-Agnostic Music Transcription

no code implementations • 27 Sep 2023 • Frank Cwitkowitz, Kin Wai Cheuk, Woosung Choi, Marco A. Martínez-Ramírez, Keisuke Toyama, Wei-Hsiang Liao, Yuki Mitsufuji

Several works have explored multi-instrument transcription as a means to bolster the performance of models on low-resource tasks, but these methods face the same data availability issues.

Music Transcription

Neural Vocoder is All You Need for Speech Super-resolution

1 code implementation • 28 Mar 2022 • Haohe Liu, Woosung Choi, Xubo Liu, Qiuqiang Kong, Qiao Tian, DeLiang Wang

In this paper, we propose a neural vocoder-based speech super-resolution method (NVSR) that can handle a variety of input resolutions and upsampling ratios.

Audio Super-Resolution • Bandwidth Extension +1
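
To make the NVSR snippet above concrete, here is a minimal sketch of that style of pipeline. Everything in it is an illustrative assumption rather than the paper's code: the function names are hypothetical, and the mel predictor and vocoder are pass-through stand-ins. The point sketched is that resampling to the target rate first makes the rest of the pipeline agnostic to the input resolution.

```python
# Hypothetical sketch of an NVSR-style flow; names and the 48 kHz target are
# illustrative assumptions, and the two stand-in functions just pass data
# through so the sketch runs end to end.
import numpy as np
from scipy.signal import resample_poly

def predict_full_band_mel(x):
    return x  # stand-in for the network that predicts the full-band spectrum

def neural_vocoder(mel):
    return mel  # stand-in for a neural vocoder that renders the waveform

def super_resolve(x_low, sr_in, sr_out=48000):
    # Resampling to the target rate first is what lets a single model handle
    # many input resolutions and upsampling ratios.
    x_up = resample_poly(x_low, sr_out, sr_in)
    return neural_vocoder(predict_full_band_mel(x_up))

y = super_resolve(np.random.randn(16000), sr_in=16000)  # toy 1-second clip
```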

Learning source-aware representations of music in a discrete latent space

no code implementations • 26 Nov 2021 • Jinsung Kim, Yeong-Seok Jeong, Woosung Choi, Jaehwa Chung, Soonyoung Jung

To address this issue, we propose a novel method to learn source-aware latent representations of music through a Vector-Quantized Variational Auto-Encoder (VQ-VAE). We train our VQ-VAE to encode an input mixture into a tensor of integers in a discrete latent space, and design it to have a decomposed structure which allows humans to manipulate the latent vector in a source-aware manner.
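
As a rough illustration of the vector-quantization step described in this snippet, the sketch below maps each latent vector to the index of its nearest codebook entry; a decomposed, source-aware layout could then reserve a block of the resulting integer tensor for each source. Names and shapes here are assumptions for illustration, not the paper's code.

```python
# Minimal vector-quantization sketch (shapes and names are illustrative).
import numpy as np

def quantize(z, codebook):
    """Map each latent vector to the index of its nearest codebook entry.

    z: (n, d) array of latent vectors; codebook: (k, d) array of code vectors.
    Returns an (n,) array of integer codes, i.e., the discrete latent.
    """
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
codes = quantize(rng.normal(size=(8, 16)), rng.normal(size=(512, 16)))
```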

LightSAFT: Lightweight Latent Source Aware Frequency Transform for Source Separation

no code implementations • 24 Nov 2021 • Yeong-Seok Jeong, Jinsung Kim, Woosung Choi, Jaehwa Chung, Soonyoung Jung

Conditioned source separation has attracted significant attention because of its flexibility, applicability, and extensibility.

KUIELab-MDX-Net: A Two-Stream Neural Network for Music Demixing

1 code implementation • 24 Nov 2021 • Minseok Kim, Woosung Choi, Jaehwa Chung, Daewon Lee, Soonyoung Jung

This paper proposes a two-stream neural network for music demixing, called KUIELab-MDX-Net, which strikes a good balance between performance and required resources.

Music Source Separation • Vocal Bursts Valence Prediction
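
The "two-stream" idea in this entry lends itself to a very small sketch: run a spectrogram-domain separator and a waveform-domain separator independently, then blend their per-source estimates. The blending weight below is an illustrative assumption, not the released model's value.

```python
# Hedged sketch of blending two separation streams (weight is illustrative).
import numpy as np

def blend_streams(est_spec, est_wave, w=0.5):
    # est_spec, est_wave: (n_sources, n_samples) waveform estimates from a
    # spectrogram-based stream and a time-domain stream, respectively.
    return w * est_spec + (1.0 - w) * est_wave

n_src, n_samples = 4, 44100
out = blend_streams(np.zeros((n_src, n_samples)), np.zeros((n_src, n_samples)))
```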

Music Demixing Challenge 2021

1 code implementation • 31 Aug 2021 • Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk

The main differences from past challenges are that 1) the competition is designed so that machine learning practitioners from other disciplines can participate more easily, 2) evaluation is done on a hidden test set created by music professionals exclusively for the challenge to assure its transparency, i.e., the test set is not accessible to anyone except the challenge organizers, and 3) the dataset covers a wider range of music genres and involves a greater number of mixing engineers.

Music Source Separation

AMSS-Net: Audio Manipulation on User-Specified Sources with Textual Queries

1 code implementation • 28 Apr 2021 • Woosung Choi, Minseok Kim, Marco A. Martínez Ramírez, Jaehwa Chung, Soonyoung Jung

This paper proposes a neural network that applies audio transformations to user-specified sources (e.g., vocals) of a given audio track according to a given description while preserving the sources not mentioned in the description.

LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation

1 code implementation • 22 Oct 2020 • Woosung Choi, Minseok Kim, Jaehwa Chung, Soonyoung Jung

Recent deep-learning approaches have shown that Frequency Transformation (FT) blocks can significantly improve spectrogram-based single-source separation models by capturing frequency patterns.

Music Source Separation
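
The FT block mentioned in this entry's snippet is, at its core, a fully connected layer applied along the frequency axis of a spectrogram and shared across time frames. The sketch below is a minimal PyTorch rendition under that reading; the bottleneck size and layer layout are illustrative assumptions.

```python
# Minimal Frequency Transformation (FT) block sketch; sizes are illustrative.
import torch
import torch.nn as nn

class FTBlock(nn.Module):
    def __init__(self, n_freq_bins, bottleneck=16):
        super().__init__()
        # nn.Linear acts on the last dimension, so keeping frequency last
        # applies the same transform to every time frame and channel.
        self.net = nn.Sequential(
            nn.Linear(n_freq_bins, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, n_freq_bins),
        )

    def forward(self, x):
        # x: (batch, channels, time, freq) magnitude spectrogram
        return self.net(x)

out = FTBlock(n_freq_bins=1024)(torch.randn(1, 2, 64, 1024))
```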
