Search Results for author: Jingbei Li

Found 5 papers, 1 paper with code

Enhancing Speaking Styles in Conversational Text-to-Speech Synthesis with Graph-based Multi-modal Context Modeling

2 code implementations · 11 Jun 2021 · Jingbei Li, Yi Meng, Chenyi Li, Zhiyong Wu, Helen Meng, Chao Weng, Dan Su

However, state-of-the-art context modeling methods in conversational TTS only model the textual information in context with a recurrent neural network (RNN).

Speech Synthesis · Text-To-Speech Synthesis

Towards Multi-Scale Style Control for Expressive Speech Synthesis

no code implementations · 8 Apr 2021 · Xiang Li, Changhe Song, Jingbei Li, Zhiyong Wu, Jia Jia, Helen Meng

This paper introduces a multi-scale speech style modeling method for end-to-end expressive speech synthesis.

Expressive Speech Synthesis · Style Transfer

Adversarially learning disentangled speech representations for robust multi-factor voice conversion

no code implementations · 30 Jan 2021 · Jie Wang, Jingbei Li, Xintao Zhao, Zhiyong Wu, Shiyin Kang, Helen Meng

To increase the robustness of highly controllable style transfer over multiple factors in voice conversion (VC), we propose a disentangled speech representation learning framework based on adversarial learning.

Representation Learning · Style Transfer · +1

Syntactic representation learning for neural network based TTS with syntactic parse tree traversal

no code implementations · 13 Dec 2020 · Changhe Song, Jingbei Li, Yixuan Zhou, Zhiyong Wu, Helen Meng

Meanwhile, nuclear-norm maximization loss is introduced to enhance the discriminability and diversity of the embeddings of constituent labels.

Representation Learning · Sentence
