Search Results for author: Nikolaos Ellinas

Found 22 papers, 3 papers with code

Controllable speech synthesis by learning discrete phoneme-level prosodic representations

no code implementations • 29 Nov 2022 • Nikolaos Ellinas, Myrsini Christidou, Alexandra Vioni, June Sig Sung, Aimilios Chalamandaris, Pirros Tsiakoulis, Paris Mastorocostas

The final model enables fine-grained phoneme-level prosody control for all speakers contained in the training set, while maintaining the speaker identity.

Clustering · Speech Synthesis
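
The paper's model and training details are not reproduced in this listing, but the core idea of quantizing continuous phoneme-level prosodic features into discrete labels can be sketched with plain k-means clustering (all names, shapes, and feature choices below are illustrative, not taken from the paper):

```python
import numpy as np

def kmeans_quantize(features, k, iters=50, seed=0):
    """Cluster phoneme-level prosody features (e.g. per-phoneme F0,
    energy, duration statistics) into k discrete prosodic labels with
    plain k-means -- a toy stand-in for learned discrete representations."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct feature vectors.
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each phoneme's feature vector to its nearest centroid.
        d = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for c in range(k):
            if (labels == c).any():
                centroids[c] = features[labels == c].mean(axis=0)
    return labels, centroids

# Toy example: 2-D prosody features for 6 "phonemes" in two clear groups.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels, cents = kmeans_quantize(feats, k=2)
```

At inference, such discrete labels could then be selected per phoneme to control prosody while the rest of the model holds the speaker identity fixed.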

Predicting phoneme-level prosody latents using AR and flow-based Prior Networks for expressive speech synthesis

no code implementations • 2 Nov 2022 • Konstantinos Klapsas, Karolos Nikitaras, Nikolaos Ellinas, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

A large part of the expressive speech synthesis literature focuses on learning prosodic representations of the speech signal which are then modeled by a prior distribution during inference.

Expressive Speech Synthesis
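
As a rough illustration of the prior-modeling idea (not the paper's actual AR or flow networks), a toy Gaussian AR(1) prior over phoneme-level prosody latents can be sampled at inference time like this:

```python
import numpy as np

def sample_ar_prior(T, dim, coeff=0.9, scale=0.1, seed=0):
    """Sample a phoneme-level prosody latent sequence from a toy AR(1)
    Gaussian prior: z_t = coeff * z_{t-1} + eps, eps ~ N(0, scale^2).
    A hypothetical stand-in for a learned autoregressive prior network."""
    rng = np.random.default_rng(seed)
    z = np.zeros((T, dim))
    for t in range(1, T):
        # Each latent depends on the previous one plus Gaussian noise,
        # giving temporally smooth prosody trajectories.
        z[t] = coeff * z[t - 1] + rng.normal(0.0, scale, size=dim)
    return z

latents = sample_ar_prior(T=20, dim=4)  # 20 phonemes, 4-D latents
```

A trained prior would replace the fixed coefficient and noise scale with network outputs conditioned on text; the sampled sequence would then condition the synthesis decoder.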

Learning utterance-level representations through token-level acoustic latents prediction for Expressive Speech Synthesis

no code implementations • 1 Nov 2022 • Karolos Nikitaras, Konstantinos Klapsas, Nikolaos Ellinas, Georgia Maniati, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

We show that the fine-grained latent space also captures coarse-grained information, which becomes more evident as the dimension of the latent space is increased to capture diverse prosodic representations.

Disentanglement · Expressive Speech Synthesis

Cross-lingual Text-To-Speech with Flow-based Voice Conversion for Improved Pronunciation

no code implementations • 31 Oct 2022 • Nikolaos Ellinas, Georgios Vamvoukakis, Konstantinos Markopoulos, Georgia Maniati, Panos Kakoulidis, June Sig Sung, Inchul Hwang, Spyros Raptis, Aimilios Chalamandaris, Pirros Tsiakoulis

When used in a cross-lingual setting, acoustic features are first produced with a native speaker of the target language; voice conversion is then applied by the same model to convert these features to the target speaker's voice.

Decoder · Disentanglement · +1
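
The flow-based conversion itself is not detailed in this snippet, but the property it relies on, an exactly invertible mapping between acoustic features and a latent space, can be illustrated with a single affine coupling layer (toy fixed weights; a real model would condition the coupling networks on speaker identity):

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer, the invertible building block of many
    normalizing-flow models: half the features pass through unchanged,
    while the other half is scaled and shifted by values computed from
    the first half, so the mapping can be run forward and exactly inverted."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.half = dim // 2
        self.W_s = rng.normal(0, 0.1, (self.half, self.half))
        self.W_t = rng.normal(0, 0.1, (self.half, self.half))

    def _nets(self, xa):
        # Toy "networks" producing log-scale and shift from the untouched half.
        log_s = np.tanh(xa @ self.W_s)  # bounded log-scale for stability
        t = xa @ self.W_t
        return log_s, t

    def forward(self, x):
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self._nets(xa)
        yb = xb * np.exp(log_s) + t
        return np.concatenate([xa, yb], axis=1)

    def inverse(self, y):
        ya, yb = y[:, :self.half], y[:, self.half:]
        log_s, t = self._nets(ya)
        xb = (yb - t) * np.exp(-log_s)
        return np.concatenate([ya, xb], axis=1)

layer = AffineCoupling(dim=4)
x = np.random.default_rng(1).normal(size=(3, 4))
roundtrip = layer.inverse(layer.forward(x))  # recovers x exactly
```

Invertibility is what lets one model map source-speaker features into a shared latent space and decode them back out under a different speaker's conditioning.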

Fine-grained Noise Control for Multispeaker Speech Synthesis

no code implementations • 11 Apr 2022 • Karolos Nikitaras, Georgios Vamvoukakis, Nikolaos Ellinas, Konstantinos Klapsas, Konstantinos Markopoulos, Spyros Raptis, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, Pirros Tsiakoulis

A text-to-speech (TTS) model typically factorizes speech attributes such as content, speaker and prosody into disentangled representations. Recent works additionally aim to model the acoustic conditions explicitly, in order to disentangle the primary speech factors, i.e. linguistic content, prosody and timbre, from any residual factors such as recording conditions and background noise. This paper proposes unsupervised, interpretable and fine-grained noise and prosody modeling.

Expressive Speech Synthesis
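
As a schematic of the factorization described above (illustrative shapes only, not the paper's architecture), the synthesis decoder can be conditioned on a concatenation of separately encoded factors:

```python
import numpy as np

def condition_decoder(content, speaker, prosody, noise):
    """Toy factorized conditioning: each speech factor (linguistic content,
    speaker timbre, prosody, residual acoustic/noise conditions) comes from
    its own encoder, and the decoder sees their concatenation per frame.
    Names and shapes are hypothetical."""
    T = content.shape[0]
    # Broadcast utterance-level factors over all T frames.
    speaker_seq = np.tile(speaker, (T, 1))
    noise_seq = np.tile(noise, (T, 1))
    return np.concatenate([content, speaker_seq, prosody, noise_seq], axis=1)

cond = condition_decoder(
    content=np.zeros((5, 8)),  # frame-level linguistic features
    speaker=np.ones(4),        # utterance-level speaker embedding
    prosody=np.zeros((5, 3)),  # fine-grained (frame-level) prosody latents
    noise=np.ones(2),          # utterance-level noise/acoustic-condition latent
)
```

Keeping the factors in separate embeddings is what allows, e.g., swapping the noise latent for a clean one at inference while leaving content, speaker and prosody untouched.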

Karaoker: Alignment-free singing voice synthesis with speech training data

no code implementations • 8 Apr 2022 • Panos Kakoulidis, Nikolaos Ellinas, Georgios Vamvoukakis, Konstantinos Markopoulos, June Sig Sung, Gunu Jho, Pirros Tsiakoulis, Aimilios Chalamandaris

Existing singing voice synthesis (SVS) models are usually trained on singing data and depend on either error-prone time-alignment and duration features or explicit music score information.

Singing Voice Synthesis · Speaker Identification

Word-Level Style Control for Expressive, Non-attentive Speech Synthesis

no code implementations • 19 Nov 2021 • Konstantinos Klapsas, Nikolaos Ellinas, June Sig Sung, Hyoungmin Park, Spyros Raptis

This paper presents an expressive speech synthesis architecture for modeling and controlling the speaking style at a word level.

Expressive Speech Synthesis

Cross-lingual Low Resource Speaker Adaptation Using Phonological Features

no code implementations • 17 Nov 2021 • Georgia Maniati, Nikolaos Ellinas, Konstantinos Markopoulos, Georgios Vamvoukakis, June Sig Sung, Hyoungmin Park, Aimilios Chalamandaris, Pirros Tsiakoulis

Subsequently, we fine-tune the model with very limited data of a new speaker's voice in either a seen or an unseen language, and achieve synthetic speech of equal quality, while preserving the target speaker's identity.

Speech Synthesis

Unsupervised low-rank representations for speech emotion recognition

no code implementations • 14 Apr 2021 • Georgios Paraskevopoulos, Efthymios Tzinis, Nikolaos Ellinas, Theodoros Giannakopoulos, Alexandros Potamianos

We examine the use of linear and non-linear dimensionality reduction algorithms for extracting low-rank feature representations for speech emotion recognition.

Dimensionality Reduction · General Classification · +1
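
A minimal example of the linear side of such a comparison, PCA via SVD on hypothetical utterance-level acoustic statistics (names, shapes and data are illustrative, not the paper's setup):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Linear dimensionality reduction by PCA (via SVD): center the
    features, then project onto the top principal directions to get a
    low-rank representation for a downstream emotion classifier."""
    Xc = X - X.mean(axis=0)                  # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # project onto top components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))  # e.g. 100 utterances x 40 acoustic statistics
Z = pca_reduce(X, n_components=5)
```

Non-linear alternatives (e.g. kernel or manifold methods) would replace the projection step while keeping the same extract-then-classify pipeline.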
