Search Results for author: Jae-Sung Bae

Found 9 papers, 2 papers with code

Avocodo: Generative Adversarial Network for Artifact-free Vocoder

2 code implementations • 27 Jun 2022 • Taejun Bak, Junmo Lee, Hanbin Bae, Jinhyeok Yang, Jae-Sung Bae, Young-Sun Joo

Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts.

Generative Adversarial Network

FastPitchFormant: Source-filter based Decomposed Modeling for Speech Synthesis

1 code implementation • 29 Jun 2021 • Taejun Bak, Jae-Sung Bae, Hanbin Bae, Young-Ik Kim, Hoon-Young Cho

Methods for modeling and controlling prosody with acoustic features have been proposed for neural text-to-speech (TTS) models.

Speech Synthesis

A Neural Text-to-Speech Model Utilizing Broadcast Data Mixed with Background Music

no code implementations • 4 Mar 2021 • Hanbin Bae, Jae-Sung Bae, Young-Sun Joo, Young-Ik Kim, Hoon-Young Cho

Second, the GST-TTS model with an auxiliary quality classifier is trained with the filtered speech and a small amount of clean speech.

GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis

no code implementations • 29 Jun 2021 • Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Youngik Kim, Hoon-Young Cho

Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled the generation of reasonably good speech quality with a single model and made it possible to synthesize the speech of a speaker with limited training data.

Speech Synthesis • Vocal Bursts Intensity Prediction

Hierarchical Context-Aware Transformers for Non-Autoregressive Text to Speech

no code implementations • 29 Jun 2021 • Jae-Sung Bae, Tae-Jun Bak, Young-Sun Joo, Hoon-Young Cho

Therefore, to improve the modeling performance of the TNA-TTS model we propose a hierarchical Transformer structure-based text encoder and audio decoder that are designed to accommodate the characteristics of each module.

Sentence

Into-TTS: Intonation Template Based Prosody Control System

no code implementations • 4 Apr 2022 • Jihwan Lee, Joun Yeop Lee, Heejin Choi, Seongkyu Mun, Sangjun Park, Jae-Sung Bae, Chanwoo Kim

Two proposed modules are added to the end-to-end TTS framework: an intonation predictor and an intonation encoder.

Language Modelling

Hierarchical and Multi-Scale Variational Autoencoder for Diverse and Natural Non-Autoregressive Text-to-Speech

no code implementations • 8 Apr 2022 • Jae-Sung Bae, Jinhyeok Yang, Tae-Jun Bak, Young-Sun Joo

This paper proposes a hierarchical and multi-scale variational autoencoder-based non-autoregressive text-to-speech model (HiMuV-TTS) to generate natural speech with diverse speaking styles.

An Empirical Study on L2 Accents of Cross-lingual Text-to-Speech Systems via Vowel Space

no code implementations • 6 Nov 2022 • Jihwan Lee, Jae-Sung Bae, Seongkyu Mun, Heejin Choi, Joun Yeop Lee, Hoon-Young Cho, Chanwoo Kim

With the recent developments in cross-lingual Text-to-Speech (TTS) systems, L2 (second-language, or foreign) accent problems arise.

Latent Filling: Latent Space Data Augmentation for Zero-shot Speech Synthesis

no code implementations • 5 Oct 2023 • Jae-Sung Bae, Joun Yeop Lee, Ji-Hyun Lee, Seongkyu Mun, Taehwa Kang, Hoon-Young Cho, Chanwoo Kim

Previous works in zero-shot text-to-speech (ZS-TTS) have attempted to enhance its systems by enlarging the training data through crowd-sourcing or augmenting existing speech data.

Data Augmentation • Speech Synthesis
