Search Results for author: Brooke Stephenson

Found 3 papers, 0 papers with code

BERT, can HE predict contrastive focus? Predicting and controlling prominence in neural TTS using a language model

no code implementations • 4 Jul 2022 • Brooke Stephenson, Laurent Besacier, Laurent Girin, Thomas Hueber

We collect a corpus of utterances containing contrastive focus and we evaluate the accuracy of a BERT model, fine-tuned to predict quantized acoustic prominence features, on these samples.

Tasks: Language Modelling, Speech Synthesis, +1
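The setup described above, a BERT model fine-tuned to predict quantized prominence features for each token, resembles standard token classification. The sketch below is a minimal illustration of that framing, not the paper's actual pipeline; the number of quantization bins, the checkpoint name, and the example sentence are all assumptions.

```python
# Hypothetical sketch: token-level prominence prediction with a fine-tuned BERT.
# N_BINS (number of quantized prominence levels) and the checkpoint name are
# illustrative assumptions, not details taken from the paper.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

N_BINS = 4  # assumed number of quantized prominence bins

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=N_BINS
)

text = "I said the RED car, not the blue one."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, N_BINS)
predicted_bins = logits.argmax(dim=-1)     # one prominence bin per word piece

for token, bin_id in zip(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predicted_bins[0]
):
    print(f"{token:>12s}  prominence bin {bin_id.item()}")
```

In practice the classification head would be trained on prominence labels derived from acoustic features before the predictions are meaningful; the snippet only shows the per-token prediction interface.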

What the Future Brings: Investigating the Impact of Lookahead for Incremental Neural TTS

no code implementations • 4 Sep 2020 • Brooke Stephenson, Laurent Besacier, Laurent Girin, Thomas Hueber

In this paper, we study the behavior of a neural sequence-to-sequence TTS system when used in an incremental mode, i.e. when generating speech output for token n, the system has access to n + k tokens from the text sequence.

Tasks: Decoder, Sentence, +2
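The lookahead policy in this abstract (producing output for token n while only tokens up to n + k are visible) can be illustrated independently of any particular TTS model. Below is a minimal sketch of an incremental driver; `synthesize_prefix` is a hypothetical placeholder for a real sequence-to-sequence TTS call and is not an API from the paper.

```python
# Hypothetical sketch of incremental TTS with a fixed lookahead of k tokens.
# `synthesize_prefix` is a stand-in for a real seq2seq TTS system.
from typing import List


def synthesize_prefix(tokens: List[str], up_to: int) -> str:
    """Pretend to synthesize speech for tokens[:up_to], given the visible context."""
    return f"audio for {' '.join(tokens[:up_to])!r} (visible context: {len(tokens)} tokens)"


def incremental_tts(text: str, k: int) -> None:
    tokens = text.split()
    for n in range(1, len(tokens) + 1):
        # When generating output for token n, only tokens 1..n+k are available.
        visible = tokens[: min(n + k, len(tokens))]
        print(synthesize_prefix(visible, up_to=n))


incremental_tts("what the future brings for incremental neural TTS", k=2)
```

Varying k trades latency against output quality: larger lookahead gives the synthesizer more right-hand context per token, which is the effect the paper investigates.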
