Search Results for author: Kayoko Yanagisawa

Found 5 papers, 0 papers with code

Modelling low-resource accents without accent-specific TTS frontend

no code implementations • 11 Jan 2023 • Georgi Tinchev, Marta Czarnowska, Kamil Deja, Kayoko Yanagisawa, Marius Cotescu

Prior work on modelling accents assumes a phonetic transcription is available for the target accent, which might not be the case for low-resource, regional accents.

Voice Conversion

Remap, warp and attend: Non-parallel many-to-many accent conversion with Normalizing Flows

no code implementations • 10 Nov 2022 • Abdelhamid Ezzerg, Thomas Merritt, Kayoko Yanagisawa, Piotr Bilinski, Magdalena Proszewska, Kamil Pokora, Renard Korzeniowski, Roberto Barra-Chicote, Daniel Korzekwa

Regional accents of the same language affect not only how words are pronounced (i.e., phonetic content), but also impact prosodic aspects of speech such as speaking rate and intonation.
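The paper's title names normalizing flows as the conversion mechanism. As a rough illustration of the building block behind such flows, here is a minimal sketch of an affine coupling layer in NumPy; the function names, the toy conditioning networks, and all shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    """Split features; transform one half conditioned on the other.

    `scale_net` and `shift_net` stand in for learned networks
    (hypothetical here). The transform is invertible by construction.
    """
    d = x.shape[-1] // 2
    xa, xb = x[..., :d], x[..., d:]
    log_s = scale_net(xa)           # predicted log-scale
    t = shift_net(xa)               # predicted shift
    yb = xb * np.exp(log_s) + t     # invertible affine transform
    log_det = log_s.sum(axis=-1)    # log-determinant of the Jacobian
    return np.concatenate([xa, yb], axis=-1), log_det

def affine_coupling_inverse(y, scale_net, shift_net):
    """Exact inverse: recover x from y using the untouched half."""
    d = y.shape[-1] // 2
    ya, yb = y[..., :d], y[..., d:]
    log_s = scale_net(ya)
    t = shift_net(ya)
    xb = (yb - t) * np.exp(-log_s)
    return np.concatenate([ya, xb], axis=-1)
```

Because the first half of the features passes through unchanged, the inverse can recompute the same scale and shift, which is what makes the flow exactly invertible and its log-likelihood tractable.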

Unify and Conquer: How Phonetic Feature Representation Affects Polyglot Text-To-Speech (TTS)

no code implementations • 4 Jul 2022 • Ariadna Sanchez, Alessio Falai, Ziyao Zhang, Orazio Angelini, Kayoko Yanagisawa

In this paper, we conduct a comprehensive study comparing multilingual NTTS models trained with both representations.

Mix and Match: An Empirical Study on Training Corpus Composition for Polyglot Text-To-Speech (TTS)

no code implementations • 4 Jul 2022 • Ziyao Zhang, Alessio Falai, Ariadna Sanchez, Orazio Angelini, Kayoko Yanagisawa

Training multilingual Neural Text-To-Speech (NTTS) models using only monolingual corpora has emerged as a popular way of building voice-cloning-based Polyglot NTTS systems.
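Composing a training corpus from several monolingual corpora, as the paper studies empirically, can be sketched as weighted sampling across per-language pools. The corpus names, utterance IDs, and mixing weights below are purely illustrative assumptions, not the paper's actual composition.

```python
import random

def make_sampler(corpora, weights, seed=0):
    """Return a sampler yielding (language, utterance_id) pairs,
    drawing each sample from one monolingual corpus at the given rate.
    `corpora` maps a language code to its list of utterance IDs."""
    rng = random.Random(seed)
    langs = list(corpora)
    def sample():
        lang = rng.choices(langs, weights=weights, k=1)[0]
        return lang, rng.choice(corpora[lang])
    return sample

# Hypothetical monolingual pools and mixing weights.
corpora = {
    "en": ["en_0001", "en_0002", "en_0003"],
    "es": ["es_0001", "es_0002"],
    "ja": ["ja_0001"],
}
sample = make_sampler(corpora, weights=[0.5, 0.3, 0.2])
batch = [sample() for _ in range(8)]
```

Varying the weights is one simple knob for studying how corpus composition affects the resulting polyglot voice.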

Speech Synthesis • Voice Cloning

Singing Synthesis: with a little help from my attention

no code implementations • 12 Dec 2019 • Orazio Angelini, Alexis Moinet, Kayoko Yanagisawa, Thomas Drugman

We present UTACO, a singing synthesis model based on an attention-based sequence-to-sequence mechanism and a vocoder based on dilated causal convolutions.
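The abstract describes a vocoder built on dilated causal convolutions. As a toy sketch of that component (not UTACO's code; the kernel values and sizes are illustrative assumptions), a causal convolution left-pads its input so that each output sample depends only on present and past samples, and stacking layers with growing dilation widens the receptive field exponentially:

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation):
    """Causal 1-D convolution: output[t] depends only on x[<= t]."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left-pad so no future leakage
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(kernel[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def dilated_stack(x, kernel, dilations=(1, 2, 4, 8)):
    """Stack layers with doubling dilation to grow the receptive field."""
    for d in dilations:
        x = dilated_causal_conv1d(x, kernel, d)
    return x
```

Feeding an impulse through a single layer shows the causal structure: all outputs before the impulse position are exactly zero.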
