Search Results for author: Kyungguen Byun

Found 3 papers, 0 papers with code

Stylebook: Content-Dependent Speaking Style Modeling for Any-to-Any Voice Conversion using Only Speech Data

no code implementations • 6 Sep 2023 • Hyungseob Lim, Kyungguen Byun, Sunkuk Moon, Erik Visser

Finally, content information extracted from the source speech and content-dependent target style embeddings are fed into a diffusion-based decoder to generate the converted speech mel-spectrogram.

Self-Supervised Learning • Voice Conversion
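
As a rough illustration of the pipeline described in the abstract above, the sketch below shows one plausible way to obtain content-dependent style embeddings: source content features attend over a bank of target-speaker style tokens, and the result conditions a decoder. This is not the authors' code; the module name, token count, and dimensions are assumptions for illustration only.

```python
# Hypothetical sketch, not the Stylebook implementation: content frames attend
# over a "stylebook" of target-speaker style tokens to produce per-frame,
# content-dependent style embeddings that would condition a diffusion decoder.
import torch
import torch.nn as nn


class ContentDependentStyle(nn.Module):
    def __init__(self, dim: int = 256, n_style_tokens: int = 64):
        super().__init__()
        # Bank of target-style tokens; in practice these would be derived from
        # target speech, here they are random placeholders.
        self.style_tokens = nn.Parameter(torch.randn(n_style_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, content: torch.Tensor) -> torch.Tensor:
        # content: (batch, frames, dim) features from a self-supervised speech model.
        style = self.style_tokens.unsqueeze(0).expand(content.size(0), -1, -1)
        # Each content frame queries the style tokens, so the retrieved style
        # depends on what is being said in that frame.
        style_per_frame, _ = self.attn(query=content, key=style, value=style)
        return style_per_frame


if __name__ == "__main__":
    content = torch.randn(2, 100, 256)                 # placeholder content features
    style_per_frame = ContentDependentStyle()(content)
    decoder_input = torch.cat([content, style_per_frame], dim=-1)
    print(decoder_input.shape)                         # (2, 100, 512): input to a diffusion-based decoder
```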

ExcitNet vocoder: A neural excitation model for parametric speech synthesis systems

no code implementations • 9 Nov 2018 • Eunwoo Song, Kyungguen Byun, Hong-Goo Kang

Conventional WaveNet-based neural vocoding systems significantly improve the perceptual quality of synthesized speech by statistically generating a time sequence of speech waveforms through an auto-regressive framework.

Speech Synthesis
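
The auto-regressive framework mentioned in the abstract can be pictured with the toy generation loop below: each waveform sample is drawn from a distribution predicted from recently generated samples plus per-sample acoustic conditioning. The network, receptive field, and quantization settings are placeholder assumptions, not the ExcitNet model.

```python
# Hypothetical sketch, not the ExcitNet vocoder: the sample-by-sample loop that
# makes WaveNet-style neural vocoders autoregressive.
import torch
import torch.nn as nn
import torch.nn.functional as F

RECEPTIVE_FIELD = 256   # past samples the toy model sees (assumed)
N_BINS = 256            # 8-bit mu-law-style quantization of the waveform


class ToyARVocoder(nn.Module):
    def __init__(self, feat_dim: int = 80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(RECEPTIVE_FIELD + feat_dim, 512), nn.ReLU(),
            nn.Linear(512, N_BINS),
        )

    def forward(self, history: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # Predict a categorical distribution over the next sample's bin.
        return self.net(torch.cat([history, features], dim=-1))


@torch.no_grad()
def generate(model: ToyARVocoder, features: torch.Tensor) -> torch.Tensor:
    # features: (n_samples, feat_dim) acoustic conditioning upsampled to the sample rate.
    n_samples = features.size(0)
    samples = torch.zeros(n_samples)
    for t in range(n_samples):
        history = samples[max(0, t - RECEPTIVE_FIELD):t]
        history = F.pad(history, (RECEPTIVE_FIELD - history.numel(), 0))
        probs = torch.softmax(model(history, features[t]), dim=-1)
        bin_idx = torch.multinomial(probs, 1).item()
        samples[t] = bin_idx / (N_BINS - 1) * 2.0 - 1.0   # map bin back to [-1, 1]
    return samples


if __name__ == "__main__":
    audio = generate(ToyARVocoder(), torch.randn(800, 80))  # placeholder acoustic features
    print(audio.shape)                                       # torch.Size([800])
```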

Speaker-adaptive neural vocoders for parametric speech synthesis systems

no code implementations • 8 Nov 2018 • Eunwoo Song, Jin-Seob Kim, Kyungguen Byun, Hong-Goo Kang

To generate more natural speech signals with the constraint of limited training data, we propose a speaker adaptation task with an effective variation of neural vocoding models.

Speech Synthesis
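
One common way to realize the speaker adaptation described above is to fine-tune a vocoder pretrained on multi-speaker data using the small amount of target-speaker material available. The sketch below uses placeholder names (adapt_vocoder, an L1 reconstruction loss) and is not the authors' training recipe.

```python
# Hypothetical sketch, not the authors' recipe: adapt a pretrained neural
# vocoder to a new speaker with a short fine-tuning run on limited data.
import torch
from torch.utils.data import DataLoader


def adapt_vocoder(pretrained: torch.nn.Module,
                  target_speaker_loader: DataLoader,
                  steps: int = 1000,
                  lr: float = 1e-4) -> torch.nn.Module:
    # A small learning rate and few steps keep the model close to its
    # multi-speaker initialization, which matters when target data is scarce.
    optimizer = torch.optim.Adam(pretrained.parameters(), lr=lr)
    pretrained.train()
    step = 0
    while step < steps:
        for acoustic_feats, waveform in target_speaker_loader:
            pred = pretrained(acoustic_feats)                    # predicted waveform
            loss = torch.nn.functional.l1_loss(pred, waveform)   # placeholder reconstruction loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step >= steps:
                break
    return pretrained
```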
