Search Results for author: Inchul Hwang

Found 14 papers, 0 papers with code

Predicting phoneme-level prosody latents using AR and flow-based Prior Networks for expressive speech synthesis

no code implementations • 2 Nov 2022 • Konstantinos Klapsas, Karolos Nikitaras, Nikolaos Ellinas, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

A large part of the expressive speech synthesis literature focuses on learning prosodic representations of the speech signal which are then modeled by a prior distribution during inference.

Expressive Speech Synthesis
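A minimal PyTorch sketch of the autoregressive variant of such a prior network, assuming phoneme-level latents of shape (batch, phonemes, latent_dim); the architecture and dimensions are illustrative assumptions, not the authors' model. A flow-based prior would instead transform a base distribution through invertible layers; the AR variant below conditions each phoneme's latent on its predecessors:

```python
# Hypothetical sketch of an AR prior over phoneme-level prosody latents.
import torch
import torch.nn as nn

class ARPrior(nn.Module):
    def __init__(self, latent_dim=8, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 2 * latent_dim)  # mean and log-variance

    def forward(self, z):
        # Shift right so the latent at step t is predicted from latents < t.
        z_in = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        h, _ = self.rnn(z_in)
        mu, logvar = self.proj(h).chunk(2, dim=-1)
        return mu, logvar

def prior_nll(z, mu, logvar):
    # Negative log-likelihood of z under the predicted diagonal Gaussian.
    return 0.5 * (logvar + (z - mu) ** 2 / logvar.exp()).sum(-1).mean()

z = torch.randn(4, 20, 8)  # 4 utterances, 20 phonemes, 8-dim latents
mu, logvar = ARPrior()(z)
loss = prior_nll(z, mu, logvar)  # minimized when the prior fits the latents
```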

Learning utterance-level representations through token-level acoustic latents prediction for Expressive Speech Synthesis

no code implementations • 1 Nov 2022 • Karolos Nikitaras, Konstantinos Klapsas, Nikolaos Ellinas, Georgia Maniati, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

We show that the fine-grained latent space also captures coarse-grained information, which becomes more evident as the dimension of the latent space increases to capture diverse prosodic representations.

Disentanglement • Expressive Speech Synthesis
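Reading the title literally, one way to learn an utterance-level representation is to require it to predict the token-level acoustic latents; the sketch below is an assumption about that setup, with made-up shapes and modules:

```python
# Illustrative only: an utterance-level embedding is trained to predict
# token-level acoustic latents, so it must carry prosodic information.
import torch
import torch.nn as nn

class UtteranceToTokens(nn.Module):
    def __init__(self, utt_dim=64, latent_dim=16):
        super().__init__()
        self.utt_proj = nn.Linear(utt_dim, latent_dim)
        self.rnn = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, utt_embedding, num_tokens):
        # Broadcast the utterance embedding across token positions and
        # let a small RNN predict each token's latent from it.
        h = self.utt_proj(utt_embedding).unsqueeze(1).expand(-1, num_tokens, -1)
        pred, _ = self.rnn(h)
        return pred  # (batch, num_tokens, latent_dim)

utt = torch.randn(2, 64)                 # utterance-level representations
target_latents = torch.randn(2, 30, 16)  # token-level latents from an encoder
pred = UtteranceToTokens()(utt, num_tokens=30)
loss = nn.functional.mse_loss(pred, target_latents)  # prediction objective
```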

Cross-lingual Text-To-Speech with Flow-based Voice Conversion for Improved Pronunciation

no code implementations • 31 Oct 2022 • Nikolaos Ellinas, Georgios Vamvoukakis, Konstantinos Markopoulos, Georgia Maniati, Panos Kakoulidis, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

When used in a cross-lingual setting, acoustic features are first produced with a native speaker of the target language; the same model then applies voice conversion to map these features to the target speaker's voice.

Disentanglement • Voice Conversion
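A hedged pseudocode rendering of the two-pass inference the abstract describes; `tts_model`, `flow_forward`, and `flow_inverse` are hypothetical names, not the paper's API:

```python
# Sketch of cross-lingual synthesis followed by flow-based voice conversion.
def cross_lingual_synthesis(tts_model, text, native_speaker_id, target_speaker_id):
    # Pass 1: acoustic features with native pronunciation of the target language.
    feats = tts_model.synthesize(text, speaker=native_speaker_id)
    # Pass 2: voice conversion by the same model. Because the flow is
    # invertible, features can be mapped into a speaker-independent space
    # and mapped back out conditioned on a different speaker.
    z = tts_model.flow_forward(feats, speaker=native_speaker_id)
    return tts_model.flow_inverse(z, speaker=target_speaker_id)
```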

Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation

no code implementations • 29 Dec 2020 • Hyojung Han, Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang

Current re-translation approaches are based on autoregressive sequence generation models (ReTA), which generate target tokens in the (partial) translation sequentially.

Machine Translation • TAR • +1
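The contrast between sequential ReTA decoding and non-autoregressive re-translation can be sketched as a simultaneous-translation loop; `decode_autoregressive` and `decode_parallel` are placeholder names, not a real library API:

```python
# Sketch of the re-translation loop for simultaneous MT. The ReTA baseline
# regenerates the target token by token on every source update; a
# non-autoregressive model emits all target positions in parallel, which is
# where the latency savings come from.
def simultaneous_retranslate(model, source_stream, autoregressive=False):
    prefix = []
    for token in source_stream:          # source tokens arrive one at a time
        prefix.append(token)
        if autoregressive:
            # ReTA: sequential generation, O(target_len) decoder passes.
            hypothesis = model.decode_autoregressive(prefix)
        else:
            # NAR: one (or a few) parallel passes over all positions.
            hypothesis = model.decode_parallel(prefix)
        yield hypothesis                 # latest full re-translation
```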

Ensemble-Based Deep Reinforcement Learning for Chatbots

no code implementations • 27 Aug 2019 • Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Yongjin Cho, Sungja Choi, Satish Indurthi, Seunghak Yu, Hyungtak Choi, Inchul Hwang, Jihie Kim

Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent.

Chatbot • Clustering • +4
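A toy sketch of how an ensemble of value-based chatbot agents might pick a response; `q_value` is a hypothetical per-agent method, and averaging Q-values is just one plausible combination rule:

```python
# Illustrative ensemble response selection: each independently trained agent
# scores every candidate response, and the ensemble returns the candidate
# with the highest average action-value.
def ensemble_respond(agents, dialogue_history, candidate_responses):
    best_response, best_score = None, float("-inf")
    for response in candidate_responses:
        score = sum(a.q_value(dialogue_history, response)
                    for a in agents) / len(agents)
        if score > best_score:
            best_response, best_score = response, score
    return best_response
```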

Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards

no code implementations • 27 Aug 2019 • Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Sungja Choi, Inchul Hwang, Jihie Kim

Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function.

reinforcement-learning • Reinforcement Learning (RL) • +3
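One way to make the infinite action space tractable, suggested by the title's "clustered actions", is to cluster candidate responses so the agent chooses among K discrete cluster ids; this sketch uses scikit-learn KMeans with a placeholder `embed` function and may differ from the paper's procedure:

```python
# Illustrative only: shrink an unbounded response space to k discrete actions
# by clustering sentence embeddings; `embed` maps a string to a vector.
from sklearn.cluster import KMeans
import numpy as np

def build_action_clusters(candidate_responses, embed, k=100):
    X = np.stack([embed(r) for r in candidate_responses])  # (N, d) embeddings
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    # The agent picks one of k cluster ids; a concrete response is then
    # sampled from the chosen cluster at execution time.
    clusters = {i: [] for i in range(k)}
    for response, label in zip(candidate_responses, km.labels_):
        clusters[label].append(response)
    return clusters
```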

Self-Learning Architecture for Natural Language Generation

no code implementations • WS 2018 • Hyungtak Choi, Siddarth K.M., Haehun Yang, Heesik Jeon, Inchul Hwang, Jihie Kim

In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants.

Self-Learning • Text Generation
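Template generation for NLG is commonly bootstrapped by delexicalizing utterances, i.e. replacing slot values with placeholders; the paper's exact mechanism may differ, so treat this as a generic sketch:

```python
# Generic delexicalization: turn an observed utterance into a reusable
# template by substituting known slot values with named placeholders.
def delexicalize(utterance, slots):
    # slots: mapping slot_name -> surface value, e.g. {"city": "Seoul"}
    template = utterance
    for name, value in slots.items():
        template = template.replace(value, "{" + name + "}")
    return template

print(delexicalize("The weather in Seoul is sunny today.",
                   {"city": "Seoul", "condition": "sunny"}))
# -> "The weather in {city} is {condition} today."
```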
