no code implementations • 5 Apr 2022 • Marc-Antoine Georges, Julien Diard, Laurent Girin, Jean-Luc Schwartz, Thomas Hueber
We propose a computational model of speech production that combines three components: a pre-trained neural articulatory synthesizer able to reproduce complex speech stimuli from a limited set of interpretable articulatory parameters, a DNN-based internal forward model predicting the sensory consequences of articulatory commands, and an internal inverse model, based on a recurrent neural network, that recovers articulatory commands from the acoustic speech input.
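The forward/inverse pairing described above can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the dimensions, the two-layer forward network, and the simple recurrent inverse network are all assumptions chosen to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 7 articulatory
# parameters, 20 acoustic features, 32 hidden units.
N_ARTIC, N_ACOUS, N_HIDDEN = 7, 20, 32

# Internal forward model: a small MLP predicting the sensory
# (acoustic) consequences of articulatory commands.
W1 = rng.standard_normal((N_ARTIC, N_HIDDEN)) * 0.1
W2 = rng.standard_normal((N_HIDDEN, N_ACOUS)) * 0.1

def forward_model(artic):
    """Predict acoustic features from articulatory commands."""
    return np.tanh(artic @ W1) @ W2

# Internal inverse model: a simple recurrent network recovering
# articulatory commands from a sequence of acoustic frames.
Wx = rng.standard_normal((N_ACOUS, N_HIDDEN)) * 0.1
Wh = rng.standard_normal((N_HIDDEN, N_HIDDEN)) * 0.1
Wo = rng.standard_normal((N_HIDDEN, N_ARTIC)) * 0.1

def inverse_model(acoustic_seq):
    """Map a (T, N_ACOUS) acoustic sequence to (T, N_ARTIC) commands."""
    h = np.zeros(N_HIDDEN)
    out = []
    for frame in acoustic_seq:
        h = np.tanh(frame @ Wx + h @ Wh)  # carry temporal context
        out.append(h @ Wo)
    return np.stack(out)

# Closed loop: invert acoustics to articulatory commands, then
# predict their sensory consequences with the forward model.
acoustics = rng.standard_normal((5, N_ACOUS))
commands = inverse_model(acoustics)
predicted = forward_model(commands)
```

In the full model, the pre-trained articulatory synthesizer would stand in for the environment, letting the forward model's predictions be compared against actual synthesized speech.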
1 code implementation • 28 Aug 2020 • Laurent Girin, Simon Leglaive, Xiaoyu Bie, Julien Diard, Thomas Hueber, Xavier Alameda-Pineda
Recently, a series of papers have presented different extensions of the VAE for processing sequential data. These models capture not only the latent space but also the temporal dependencies within a sequence of data vectors and the corresponding latent vectors, relying on recurrent neural networks or state-space models.
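The generative side of such a temporal VAE can be sketched as ancestral sampling from a recurrent latent prior. This is a hedged toy illustration, not any specific model from the paper: the dimensions, the linear-Gaussian decoder, and the tanh recurrence are assumptions made to show how z_t can depend on the past latents z_<t through an RNN state.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions (illustrative only).
DIM_Z, DIM_X, DIM_H = 4, 10, 16

# Recurrent prior: p(z_t | z_<t) parameterised through an RNN state h_t.
Wzh = rng.standard_normal((DIM_Z, DIM_H)) * 0.1
Whh = rng.standard_normal((DIM_H, DIM_H)) * 0.1
Wmu = rng.standard_normal((DIM_H, DIM_Z)) * 0.1

# Decoder: p(x_t | z_t), here a simple linear-Gaussian observation model.
Wdec = rng.standard_normal((DIM_Z, DIM_X)) * 0.1

def sample_sequence(T):
    """Ancestral sampling from the temporal generative model."""
    h = np.zeros(DIM_H)
    z = np.zeros(DIM_Z)
    xs = []
    for _ in range(T):
        h = np.tanh(z @ Wzh + h @ Whh)         # summarise z_<t
        mu_z = h @ Wmu
        z = mu_z + rng.standard_normal(DIM_Z)  # z_t ~ N(mu_z, I)
        x = z @ Wdec + 0.1 * rng.standard_normal(DIM_X)
        xs.append(x)
    return np.stack(xs)

x_seq = sample_sequence(8)  # a (8, DIM_X) sampled data sequence
```

A plain VAE would instead draw each z_t independently from N(0, I); the recurrence over h is what introduces the temporal dependencies the text refers to.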
no code implementations • JEPTALNRECITAL 2016 • Jean-François Patri, Julien Diard, Pascal Perrier
We propose to explore the role of these sensory modalities in the planning of speech gestures using a Bayesian model representing the structure of the knowledge involved in this task.
no code implementations • JEPTALNRECITAL 2012 • Raphaël Laurent, Jean-Luc Schwartz, Pierre Bessière, Julien Diard