Search Results for author: Taejun Bak

Found 3 papers, 2 papers with code

Avocodo: Generative Adversarial Network for Artifact-free Vocoder

2 code implementations • 27 Jun 2022 • Taejun Bak, Junmo Lee, Hanbin Bae, Jinhyeok Yang, Jae-Sung Bae, Young-Sun Joo

Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts.

Generative Adversarial Network
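
Below is a minimal sketch of the generic GAN-vocoder idea the Avocodo entry refers to: a generator maps mel spectrograms to waveforms and a discriminator is trained to tell real audio from generated audio. The module names, shapes, and least-squares loss are illustrative assumptions, not Avocodo's actual multi-band architecture.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Upsamples an 80-bin mel spectrogram to a waveform (hop size 256, assumed)."""
    def __init__(self, n_mels=80, hop=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(n_mels, 64, kernel_size=hop * 2, stride=hop, padding=hop // 2),
            nn.LeakyReLU(0.1),
            nn.Conv1d(64, 1, kernel_size=7, padding=3),
            nn.Tanh(),
        )
    def forward(self, mel):            # mel: (batch, n_mels, frames)
        return self.net(mel)           # wav: (batch, 1, frames * hop)

class ToyDiscriminator(nn.Module):
    """Scores waveform segments as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, stride=4, padding=7),
            nn.LeakyReLU(0.1),
            nn.Conv1d(32, 1, kernel_size=3, padding=1),
        )
    def forward(self, wav):
        return self.net(wav)

G, D = ToyGenerator(), ToyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

mel = torch.randn(4, 80, 32)           # toy batch of mel spectrograms
real = torch.randn(4, 1, 32 * 256)     # matching "real" waveforms (toy data)

# Discriminator step: least-squares GAN loss (an assumption; vocoder papers differ).
fake = G(mel).detach()
loss_d = ((D(real) - 1) ** 2).mean() + (D(fake) ** 2).mean()
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake = G(mel)
loss_g = ((D(fake) - 1) ** 2).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

In practice such vocoders add reconstruction terms (e.g. mel-spectrogram or feature-matching losses) on top of the adversarial loss; the sketch omits them for brevity.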

GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis

no code implementations • 29 Jun 2021 • Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Youngik Kim, Hoon-Young Cho

Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled the generation of reasonably good speech quality with a single model and made it possible to synthesize the speech of a speaker with limited training data.

Speech Synthesis • Vocal Bursts Intensity Prediction

FastPitchFormant: Source-filter based Decomposed Modeling for Speech Synthesis

1 code implementation • 29 Jun 2021 • Taejun Bak, Jae-Sung Bae, Hanbin Bae, Young-Ik Kim, Hoon-Young Cho

Methods for modeling and controlling prosody with acoustic features have been proposed for neural text-to-speech (TTS) models.

Speech Synthesis
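
The FastPitchFormant title names a source-filter based decomposition; the sketch below illustrates that general idea: one branch models an excitation-like (source) spectrogram from the pitch contour, another models a formant-like (filter) spectrogram from text hidden states, and their sum gives the mel prediction. All module names and dimensions are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class SourceFilterAcousticModel(nn.Module):
    def __init__(self, hidden=256, n_mels=80):
        super().__init__()
        self.pitch_embed = nn.Conv1d(1, hidden, kernel_size=3, padding=1)        # frame-level F0 -> source features
        self.source_decoder = nn.Conv1d(hidden, n_mels, kernel_size=5, padding=2)
        self.filter_decoder = nn.Conv1d(hidden, n_mels, kernel_size=5, padding=2)

    def forward(self, text_hidden, f0):
        # text_hidden: (batch, hidden, frames) upsampled encoder states
        # f0:          (batch, 1, frames) frame-level pitch contour
        source = self.source_decoder(self.pitch_embed(f0))   # excitation-like part
        filt = self.filter_decoder(text_hidden)              # formant-like part
        return source + filt, source, filt                   # combined mel prediction

model = SourceFilterAcousticModel()
mel, source, filt = model(torch.randn(2, 256, 120), torch.randn(2, 1, 120))
print(mel.shape)   # torch.Size([2, 80, 120])

Keeping the two branches separate is what makes pitch control possible: shifting the input F0 changes only the source branch while the text-driven filter branch stays fixed.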
