Search Results for author: Jinhyeok Yang

Found 5 papers, 2 papers with code

VarianceFlow: High-Quality and Controllable Text-to-Speech using Variance Information via Normalizing Flow

no code implementations • 27 Feb 2023 • Yoonhyung Lee, Jinhyeok Yang, Kyomin Jung

Also, the objective function of the normalizing flow (NF) makes the model use the variance information and the text in a disentangled manner, resulting in more precise variance control.
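A minimal sketch of the idea behind this kind of NF-based variance modeling: one conditional affine coupling step whose exact log-likelihood objective ties the variance feature (e.g. pitch) to the text conditioning. This is illustrative only, not the authors' code; the `ConditionalAffineCoupling` module and its dimensions are assumptions.

```python
# Illustrative sketch, not the VarianceFlow implementation.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One NF step: transforms a variance feature conditioned on a text encoding."""
    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # predicts log-scale and shift for one half
        )

    def forward(self, x, cond):
        xa, xb = x.chunk(2, dim=-1)
        log_s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        yb = xb * log_s.exp() + t            # affine transform of the second half
        log_det = log_s.sum(dim=-1)          # change-of-variables correction
        return torch.cat([xa, yb], dim=-1), log_det

# Training maximizes the exact log-likelihood of the variance given text:
# log p(x | text) = log N(z; 0, I) + sum of per-step log-determinants.
```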

Avocodo: Generative Adversarial Network for Artifact-free Vocoder

2 code implementations • 27 Jun 2022 • Taejun Bak, Junmo Lee, Hanbin Bae, Jinhyeok Yang, Jae-Sung Bae, Young-Sun Joo

Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts.
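For context, a minimal sketch of the adversarial objective a GAN-based vocoder of this kind typically trains with. Assumptions: a least-squares GAN loss and a single discriminator; Avocodo itself uses multiple collaborative multi-band and sub-band discriminators, which are omitted here for brevity.

```python
# Illustrative sketch of a least-squares GAN vocoder objective.
import torch

def lsgan_losses(disc, real_wav, fake_wav):
    """Least-squares adversarial losses commonly used by GAN vocoders."""
    d_real = disc(real_wav)
    d_fake = disc(fake_wav.detach())          # detach: no generator gradient here
    d_loss = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    g_loss = ((disc(fake_wav) - 1) ** 2).mean()
    return d_loss, g_loss
```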

Hierarchical and Multi-Scale Variational Autoencoder for Diverse and Natural Non-Autoregressive Text-to-Speech

no code implementations • 8 Apr 2022 • Jae-Sung Bae, Jinhyeok Yang, Tae-Jun Bak, Young-Sun Joo

This paper proposes a hierarchical and multi-scale variational autoencoder-based non-autoregressive text-to-speech model (HiMuV-TTS) to generate natural speech with diverse speaking styles.
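A minimal sketch of the multi-scale latent idea: one utterance-level latent and one per-frame latent, each sampled with the reparameterization trick. This is not the HiMuV-TTS architecture; the module name and dimensions are illustrative assumptions.

```python
# Illustrative two-scale VAE latent hierarchy, not the HiMuV-TTS code.
import torch
import torch.nn as nn

class TwoScaleLatent(nn.Module):
    def __init__(self, feat_dim=80, z_g=16, z_l=8):
        super().__init__()
        self.global_head = nn.Linear(feat_dim, 2 * z_g)  # outputs mu, logvar
        self.local_head = nn.Linear(feat_dim, 2 * z_l)

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize

    def forward(self, mel):  # mel: (B, T, feat_dim)
        z_global = self.sample(self.global_head(mel.mean(dim=1)))  # utterance-level
        z_local = self.sample(self.local_head(mel))                # frame-level
        return z_global, z_local
```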

GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis

no code implementations • 29 Jun 2021 • Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Youngik Kim, Hoon-Young Cho

Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled the generation of reasonably good speech quality with a single model and made it possible to synthesize the speech of a speaker with limited training data.

Speech Synthesis
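A minimal sketch of the feature-matching term that adversarial TTS training of this kind commonly adds on top of mel reconstruction. Assumption: a discriminator exposing intermediate feature maps; the automatic loss scaling proposed in GANSpeech is omitted here.

```python
# Illustrative feature-matching loss, not the GANSpeech implementation.
import torch

def feature_matching_loss(real_feats, fake_feats):
    """L1 distance between discriminator features of real and generated mels."""
    return sum(
        torch.mean(torch.abs(r.detach() - f))
        for r, f in zip(real_feats, fake_feats)
    )
```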

VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network

2 code implementations • 30 Jul 2020 • Jinhyeok Yang, Jun-Mo Lee, Youngik Kim, Hoon-Young Cho, Injung Kim

Additionally, compared with Parallel WaveGAN, another recently developed high-fidelity vocoder, VocGAN is 6.98x faster on a CPU and exhibits higher MOS.

Speech Synthesis
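A minimal sketch of the multi-resolution adversarial setup this class of vocoder uses: discriminators applied to the waveform at successively downsampled scales. This is illustrative only, not the VocGAN code; the layer choices are assumptions.

```python
# Illustrative multi-scale waveform discriminator, not the VocGAN code.
import torch
import torch.nn as nn

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.pool = nn.AvgPool1d(4, stride=2, padding=1)  # halves the resolution
        self.discs = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(1, 16, 15, stride=1, padding=7), nn.LeakyReLU(0.2),
                nn.Conv1d(16, 1, 3, padding=1),
            )
            for _ in range(num_scales)
        )

    def forward(self, wav):  # wav: (B, 1, T)
        outputs = []
        for disc in self.discs:
            outputs.append(disc(wav))   # realness score map at this scale
            wav = self.pool(wav)        # downsample before the next scale
        return outputs
```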
