Search Results for author: Jae-Min Kim

Found 9 papers, 3 papers with code

Probability density distillation with generative adversarial networks for high-quality parallel waveform generation

1 code implementation · 9 Apr 2019 · Ryuichi Yamamoto, Eunwoo Song, Jae-Min Kim

As this process encourages the student to model the distribution of realistic speech waveforms, the perceptual quality of the synthesized speech becomes much more natural.
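For the gist of the objective, here is a minimal PyTorch sketch combining a distillation term with an adversarial term, using placeholder student, teacher, and discriminator modules. It simplifies the KL divergence to a teacher negative log-likelihood (dropping the student-entropy term) and is not the paper's implementation.

```python
import torch
import torch.nn as nn

# Placeholder modules: a student that maps noise to a waveform in one pass,
# a frozen teacher scoring waveform likelihood, and a waveform discriminator.
student = nn.Sequential(nn.Linear(16000, 16000), nn.Tanh())
discriminator = nn.Linear(16000, 1)

def teacher_nll(x):
    return (x ** 2).mean()  # stand-in for -log p_teacher(x)

z = torch.randn(4, 16000)          # noise input
x_student = student(z)             # parallel (single-pass) generation

# Distillation term: push student samples toward high teacher likelihood.
# (The full KL also includes a student-entropy term, omitted here.)
distill = teacher_nll(x_student)

# Adversarial term: the generator tries to make its samples look "real".
adv = nn.functional.binary_cross_entropy_with_logits(
    discriminator(x_student), torch.ones(4, 1))

loss = distill + 4.0 * adv         # the relative weight is an assumption
loss.backward()
```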

Effective parameter estimation methods for an ExcitNet model in generative text-to-speech systems

1 code implementation · 21 May 2019 · Ohsung Kwon, Eunwoo Song, Jae-Min Kim, Hong-Goo Kang

In this paper, we propose a high-quality generative text-to-speech (TTS) system using an effective spectrum and excitation estimation method.

Speech Synthesis · Text-To-Speech Synthesis
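ExcitNet-style vocoders rest on a classical source-filter split: estimate a spectral envelope and model the remaining excitation with a neural network. Below is a minimal sketch of that split via LPC inverse filtering, with synthetic stand-in audio and an arbitrarily chosen LPC order; the neural excitation model itself is omitted.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

# Synthetic stand-in for speech: a 120 Hz "vowel" plus noise.
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(sr)

a = librosa.lpc(y, order=16)        # spectral envelope as LPC coefficients
excitation = lfilter(a, [1.0], y)   # inverse filtering -> excitation residual

# An ExcitNet-style system trains a neural vocoder on `excitation`;
# synthesis passes the generated excitation back through 1/A(z).
reconstructed = lfilter([1.0], a, excitation)
print(np.max(np.abs(y - reconstructed)))  # ~0: the split is lossless
```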

Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram

12 code implementations · 25 Oct 2019 · Ryuichi Yamamoto, Eunwoo Song, Jae-Min Kim

We propose Parallel WaveGAN, a distillation-free, fast, and small-footprint waveform generation method using a generative adversarial network.

Generative Adversarial Network · Speech Synthesis · +1
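The auxiliary multi-resolution short-time Fourier transform (STFT) loss is the ingredient that makes the distillation-free training work. A small sketch follows; the three (FFT size, hop, window) triples follow the commonly used Parallel WaveGAN configuration, while the tensor shapes and equal averaging are illustrative.

```python
import torch
import torch.nn.functional as F

def stft_loss(x, y, fft_size, hop, win_len):
    """Spectral-convergence + log-magnitude loss at one STFT resolution."""
    window = torch.hann_window(win_len)
    X = torch.stft(x, fft_size, hop, win_len, window=window, return_complex=True).abs()
    Y = torch.stft(y, fft_size, hop, win_len, window=window, return_complex=True).abs()
    sc = torch.norm(Y - X, p="fro") / torch.norm(Y, p="fro")
    mag = F.l1_loss(torch.log(X + 1e-7), torch.log(Y + 1e-7))
    return sc + mag

def multi_resolution_stft_loss(x, y):
    resolutions = [(1024, 120, 600), (2048, 240, 1200), (512, 50, 240)]
    return sum(stft_loss(x, y, *r) for r in resolutions) / len(resolutions)

fake = torch.randn(2, 16000, requires_grad=True)  # generator output
real = torch.randn(2, 16000)                      # ground-truth audio
multi_resolution_stft_loss(fake, real).backward()
```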

Parallel waveform synthesis based on generative adversarial networks with voicing-aware conditional discriminators

no code implementations · 27 Oct 2020 · Ryuichi Yamamoto, Eunwoo Song, Min-Jae Hwang, Jae-Min Kim

This paper proposes voicing-aware conditional discriminators for Parallel WaveGAN-based waveform synthesis systems.
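The idea is to let separate discriminators judge voiced and unvoiced regions, whose waveform statistics differ (periodic versus noise-like). A toy sketch of the routing, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Two tiny discriminators specialize on voiced and unvoiced regions,
# routed by a voiced/unvoiced (V/UV) mask at sample resolution.
d_voiced = nn.Conv1d(1, 1, kernel_size=9, padding=4)
d_unvoiced = nn.Conv1d(1, 1, kernel_size=9, padding=4)

def voicing_aware_scores(wav, vuv):
    """wav: (B, 1, T) waveform; vuv: (B, 1, T), 1.0 where voiced."""
    return d_voiced(wav) * vuv + d_unvoiced(wav) * (1.0 - vuv)

wav = torch.randn(2, 1, 8000)
vuv = (torch.rand(2, 1, 8000) > 0.5).float()   # placeholder V/UV decisions
scores = voicing_aware_scores(wav, vuv)        # per-sample realism scores
```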

Cross-Speaker Emotion Transfer for Low-Resource Text-to-Speech Using Non-Parallel Voice Conversion with Pitch-Shift Data Augmentation

no code implementations · 21 Apr 2022 · Ryo Terashima, Ryuichi Yamamoto, Eunwoo Song, Yuma Shirahata, Hyun-Wook Yoon, Jae-Min Kim, Kentaro Tachibana

Because pitch-shift data augmentation enables the coverage of a variety of pitch dynamics, it greatly stabilizes training for both VC and TTS models, even when only 1,000 utterances of the target speaker's neutral data are available.

Data Augmentation · Voice Conversion
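Pitch-shift augmentation itself is easy to reproduce with off-the-shelf tools. A minimal sketch using librosa; the file name and semitone offsets are assumptions, not the paper's settings.

```python
import librosa

# Hypothetical input file; offsets in semitones are illustrative.
y, sr = librosa.load("neutral_utterance.wav", sr=24000)
augmented = {
    steps: librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
    for steps in (-2, -1, 1, 2)
}
# Each shifted copy joins the VC/TTS training set alongside the original.
```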

TTS-by-TTS 2: Data-selective augmentation for neural speech synthesis using ranking support vector machine with variational autoencoder

no code implementations · 30 Jun 2022 · Eunwoo Song, Ryuichi Yamamoto, Ohsung Kwon, Chan-Ho Song, Min-Jae Hwang, Suhyeon Oh, Hyun-Wook Yoon, Jin-Seob Kim, Jae-Min Kim

In the proposed method, we first adopt a variational autoencoder whose posterior distribution is utilized to extract latent features representing acoustic similarity between the recorded and synthetic corpora.

Speech Synthesis
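A minimal sketch of that selection idea with a toy encoder: use the VAE posterior mean as an utterance-level acoustic feature and score synthetic utterances by similarity to the recorded corpus. The paper ranks candidates with a ranking support vector machine; cosine similarity to a centroid is a simplification here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    def __init__(self, n_mels=80, z_dim=16):
        super().__init__()
        self.net = nn.Linear(n_mels, 2 * z_dim)  # outputs [mu, log_var]

    def forward(self, mels):                     # mels: (B, frames, n_mels)
        mu, log_var = self.net(mels).chunk(2, dim=-1)
        return mu.mean(dim=1)                    # utterance-level posterior mean

enc = ToyEncoder()
recorded = torch.randn(100, 50, 80)              # placeholder mel batches
synthetic = torch.randn(500, 50, 80)

with torch.no_grad():
    centroid = enc(recorded).mean(dim=0, keepdim=True)
    scores = F.cosine_similarity(enc(synthetic), centroid)
selected = scores.topk(100).indices              # keep the most similar 100
```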

Period VITS: Variational Inference with Explicit Pitch Modeling for End-to-end Emotional Speech Synthesis

no code implementations · 28 Oct 2022 · Yuma Shirahata, Ryuichi Yamamoto, Eunwoo Song, Ryo Terashima, Jae-Min Kim, Kentaro Tachibana

From these features, the proposed periodicity generator produces a sample-level sinusoidal source that enables the waveform decoder to accurately reproduce the pitch.

Emotional Speech Synthesis · Variational Inference
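A sample-level sinusoidal source can be built from a frame-level F0 contour by integrating the instantaneous frequency, as in neural source-filter models. A minimal NumPy sketch with assumed sample rate, hop size, and unvoiced-noise handling:

```python
import numpy as np

sr, hop = 24000, 300                      # assumed sample rate and hop size
f0 = np.full(100, 220.0)                  # placeholder frame-level F0 (Hz)
f0[40:60] = 0.0                           # an unvoiced stretch

f0_samples = np.repeat(f0, hop)           # upsample F0 to sample level
phase = 2 * np.pi * np.cumsum(f0_samples / sr)  # integrate instantaneous freq.
source = np.where(f0_samples > 0,
                  np.sin(phase),                            # voiced: sinusoid
                  0.03 * np.random.randn(len(f0_samples)))  # unvoiced: noise
# `source` conditions the waveform decoder so pitch is reproduced exactly.
```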

Cross-Lingual Transfer Learning for Phrase Break Prediction with Multilingual Language Model

no code implementations · 5 Jun 2023 · Hoyeon Lee, Hyun-Wook Yoon, Jong-Hwan Kim, Jae-Min Kim

We investigate the effectiveness of zero-shot and few-shot cross-lingual transfer for phrase break prediction using a pre-trained multilingual language model.

Cross-Lingual Transfer · Language Modelling · +1
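Framed as token classification over a pretrained multilingual encoder, the setup is easy to sketch with Hugging Face Transformers; the backbone choice and the binary break/no-break labeling here are assumptions.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "xlm-roberta-base"  # assumed backbone; labels: 0 = no break, 1 = break
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

# Fine-tune on a source language (e.g. English), then run unchanged on a
# target language for the zero-shot condition.
inputs = tokenizer("this is a test sentence", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, seq_len, 2)
breaks = logits.argmax(dim=-1)             # per-token break decisions
```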

Fast Bilingual Grapheme-To-Phoneme Conversion

no code implementations · NAACL (ACL) 2022 · Hwa-Yeon Kim, Jong-Hwan Kim, Jae-Min Kim

Autoregressive transformer (ART)-based grapheme-to-phoneme (G2P) models have been proposed for bi/multilingual text-to-speech systems.

Data Augmentation · Sentence
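Token-by-token autoregressive decoding is the inference-time cost such ART models carry, which motivates work on making them fast. A toy sketch with an untrained nn.Transformer; vocabularies, sizes, and special-token ids are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

G_VOCAB, P_VOCAB, BOS, EOS, D = 64, 80, 1, 2, 128   # placeholder sizes/ids
g_emb, p_emb = nn.Embedding(G_VOCAB, D), nn.Embedding(P_VOCAB, D)
model = nn.Transformer(d_model=D, batch_first=True)
out_proj = nn.Linear(D, P_VOCAB)

graphemes = torch.randint(3, G_VOCAB, (1, 10))      # dummy grapheme ids
src = g_emb(graphemes)
phonemes = [BOS]
for _ in range(30):                                 # greedy, token by token
    tgt = p_emb(torch.tensor([phonemes]))
    mask = model.generate_square_subsequent_mask(len(phonemes))
    h = model(src, tgt, tgt_mask=mask)
    next_id = out_proj(h[:, -1]).argmax(-1).item()
    if next_id == EOS:
        break
    phonemes.append(next_id)
# This sequential loop is the latency bottleneck that fast G2P work targets.
```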
