Search Results for author: Junghyun Koo

Found 10 papers, 3 papers with code

DDD: A Perceptually Superior Low-Response-Time DNN-based Declipper

1 code implementation • 8 Jan 2024 • Jayeon Yi, Junghyun Koo, Kyogu Lee

Clipping is a common nonlinear distortion that occurs whenever the input or output of an audio system exceeds the supported range.
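Hard clipping of this kind can be sketched in a few lines (a minimal NumPy illustration, not code from the paper; the 0.5 threshold and 440 Hz test tone are arbitrary choices for demonstration):

```python
import numpy as np

def hard_clip(x, threshold=0.5):
    # Samples outside [-threshold, threshold] are flattened to the
    # threshold, destroying the original waveform shape in those regions.
    return np.clip(x, -threshold, threshold)

# A full-scale 440 Hz sine sampled at 8 kHz for one second.
t = np.linspace(0, 1, 8000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)
clipped = hard_clip(signal, threshold=0.5)
```

A declipper's task is the inverse problem: estimating the flattened peaks from the surviving context.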

Exploiting Time-Frequency Conformers for Music Audio Enhancement

no code implementations • 24 Aug 2023 • Yunkee Chae, Junghyun Koo, Sungho Lee, Kyogu Lee

With the proliferation of video platforms on the internet, recording musical performances by mobile devices has become commonplace.

Speech Enhancement

Self-refining of Pseudo Labels for Music Source Separation with Noisy Labeled Data

no code implementations • 24 Jul 2023 • Junghyun Koo, Yunkee Chae, Chang-Bin Jeon, Kyogu Lee

Music source separation (MSS) faces challenges due to the limited availability of correctly-labeled individual instrument tracks.

Instrument Recognition • Music Source Separation

Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects

1 code implementation • 4 Nov 2022 • Junghyun Koo, Marco A. Martínez-Ramírez, Wei-Hsiang Liao, Stefan Uhlich, Kyogu Lee, Yuki Mitsufuji

We propose an end-to-end music mixing style transfer system that converts the mixing style of an input multitrack to that of a reference song.

Contrastive Learning • Disentanglement +2

End-to-end Music Remastering System Using Self-supervised and Adversarial Training

1 code implementation • 17 Feb 2022 • Junghyun Koo, Seungryeol Paik, Kyogu Lee

Mastering is an essential step in music production, but it is also a challenging task that typically passes through the hands of experienced audio engineers, who adjust the tone, space, and volume of a song.

Reverb Conversion of Mixed Vocal Tracks Using an End-to-end Convolutional Deep Neural Network

no code implementations • 3 Mar 2021 • Junghyun Koo, Seungryeol Paik, Kyogu Lee

This method enables us to apply the reverb of a reference track to a source track on which the effect is desired.

Exploiting Multi-Modal Features From Pre-trained Networks for Alzheimer's Dementia Recognition

no code implementations • 9 Sep 2020 • Junghyun Koo, Jie Hwan Lee, Jaewoo Pyo, Yujin Jo, Kyogu Lee

In this work, we exploit various multi-modal features extracted from pre-trained networks to recognize Alzheimer's Dementia using a neural network, with a small dataset provided by the ADReSS Challenge at INTERSPEECH 2020.

Regression

Disentangling Timbre and Singing Style with Multi-singer Singing Synthesis System

no code implementations • 29 Oct 2019 • Juheon Lee, Hyeong-Seok Choi, Junghyun Koo, Kyogu Lee

In this study, we define the identity of the singer with two independent concepts - timbre and singing style - and propose a multi-singer singing synthesis system that can model them separately.

Sound • Audio and Speech Processing

Adversarially Trained End-to-end Korean Singing Voice Synthesis System

no code implementations • 6 Aug 2019 • Juheon Lee, Hyeong-Seok Choi, Chang-Bin Jeon, Junghyun Koo, Kyogu Lee

In this paper, we propose an end-to-end Korean singing voice synthesis system from lyrics and a symbolic melody using the following three novel approaches: 1) phonetic enhancement masking, 2) local conditioning of text and pitch to the super-resolution network, and 3) conditional adversarial training.

Sound • Audio and Speech Processing
