Expressive Speech Synthesis
13 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Exploring Transfer Learning for Low Resource Emotional TTS
Over the last few years, spoken language technologies have improved substantially thanks to deep learning.
Enhancing Suno's Bark Text-to-Speech Model: Addressing Limitations Through Meta's Encodec and Pre-Trained Hubert
Keywords: Bark, ai voice cloning, Suno, text-to-speech, artificial intelligence, audio generation, Meta's encodec, audio codebooks, semantic tokens, HuBert, transformer-based model, multilingual speech, wav2vec, linear projection head, embedding space, generative capabilities, pretrained model checkpoints
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron
We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody.
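The core idea here is to compress a variable-length reference acoustic signal into a fixed-size prosody embedding that conditions the synthesizer. The paper uses a convolutional/recurrent reference encoder; the sketch below is a deliberately simplified, hypothetical stand-in (mean-pooling plus a learned linear projection, names like `reference_prosody_embedding` and `W` are assumptions, not the paper's API):

```python
import numpy as np

def reference_prosody_embedding(mel, W):
    """Collapse a variable-length mel spectrogram (frames x n_mels)
    into a fixed-size prosody embedding.

    Simplified sketch: mean-pool over time, then apply a learned
    linear projection W (n_mels x emb_dim) with a tanh squashing.
    The actual paper uses a conv + recurrent reference encoder."""
    pooled = mel.mean(axis=0)     # (n_mels,) — time-invariant summary
    return np.tanh(pooled @ W)    # (emb_dim,) — bounded embedding

# Toy usage: 120 frames of an 80-bin mel spectrogram -> 16-dim embedding
rng = np.random.default_rng(0)
mel = rng.standard_normal((120, 80))   # fake reference audio features
W = rng.standard_normal((80, 16)) * 0.1
emb = reference_prosody_embedding(mel, W)
```

At synthesis time, such an embedding would be concatenated with (or added to) the text encoder states, so the decoder can reproduce the reference's prosody on new text.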
Robust and fine-grained prosody control of end-to-end speech synthesis
We propose prosody embeddings for emotional and expressive speech synthesis networks.
Visualization and Interpretation of Latent Spaces for Controlling Expressive Speech Synthesis through Audio Analysis
The field of text-to-speech has seen substantial improvements in recent years, benefiting from deep learning techniques.
Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis
Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods.
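One way to reason about "embedding capacity" in such latent variable models is through the KL term of the variational objective: instead of driving the KL divergence to zero, the objective can target a chosen capacity, trading reconstruction fidelity against how much information the latent carries. The sketch below is an illustrative, hypothetical formulation (function names and the `capacity`/`beta` parameters are assumptions, not the paper's exact objective):

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), in nats."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def capacity_constrained_loss(recon_loss, mu, logvar, capacity, beta):
    """Penalize deviation of the KL term from a target capacity,
    rather than pushing it to zero — the KL value then bounds how
    much information the latent embedding can encode."""
    kl = kl_diag_gaussian(mu, logvar)
    return recon_loss + beta * abs(kl - capacity), kl

# A standard-normal posterior carries zero nats, so with capacity=0
# the penalty vanishes and the loss reduces to the reconstruction term.
mu = np.zeros(4)
logvar = np.zeros(4)
loss, kl = capacity_constrained_loss(1.0, mu, logvar, capacity=0.0, beta=1.0)
```

Raising `capacity` lets the prosody/style latent encode more detail about the reference; lowering it forces the model to rely on the text alone.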
Laughter Synthesis: Combining Seq2seq modeling with Transfer Learning
Despite the growing interest in expressive speech synthesis, synthesis of nonverbal expressions is an under-explored area.
Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech
Expressive speech synthesis places high demands on emotion interpretation.
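Speaker condition layer normalization replaces the fixed scale and shift of ordinary layer normalization with values predicted from a speaker embedding, so speaker identity modulates the hidden states while emotion can be transferred from another speaker. A minimal sketch of the mechanism, assuming hypothetical projection matrices `W_gamma` and `W_beta`:

```python
import numpy as np

def speaker_cln(x, speaker_emb, W_gamma, W_beta, eps=1e-5):
    """Layer-normalize hidden states x (time x dim), then scale and
    shift with gamma/beta predicted from the speaker embedding,
    instead of using learned per-layer constants."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)   # standard layer norm
    gamma = speaker_emb @ W_gamma            # (dim,) speaker-dependent scale
    beta = speaker_emb @ W_beta              # (dim,) speaker-dependent shift
    return gamma * x_norm + beta

# Toy usage: 5 timesteps of 8-dim hidden states, 4-dim speaker embedding
rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))
speaker = rng.standard_normal(4)
W_gamma = rng.standard_normal((4, 8)) * 0.1
W_beta = rng.standard_normal((4, 8)) * 0.1
y = speaker_cln(x, speaker, W_gamma, W_beta)
```

Because only the normalization parameters depend on the speaker, the rest of the network can stay speaker-agnostic, which is what makes cross-speaker emotion transfer tractable.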
EMNS /Imz/ Corpus: An emotive single-speaker dataset for narrative storytelling in games, television and graphic novels
The increasing adoption of text-to-speech technologies has led to a growing demand for natural and emotive voices that adapt to a conversation's context and emotional tone.
SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer
Expressive speech synthesis models are trained on corpora augmented with diverse speakers, varied emotions, and different speaking styles, so that multiple characteristics of speech can be controlled and the desired voice generated.