Search Results for author: Yannick Estève

Found 6 papers, 1 paper with code

Exploring Gaussian mixture model framework for speaker adaptation of deep neural network acoustic models

no code implementations 15 Mar 2020 Natalia Tomashenko, Yuri Khokhlov, Yannick Estève

Experimental results on the TED-LIUM corpus show that the proposed adaptation technique can be effectively integrated into DNN and TDNN setups at different levels and provides an additional gain in recognition performance: up to 6% relative word error rate reduction (WERR) over a strong speaker-adapted DNN baseline using feature-space maximum likelihood linear regression (fMLLR), and up to 18% relative WERR compared with a speaker-independent (SI) DNN baseline trained on conventional features.
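The relative WERR figures quoted above follow from a simple ratio; a minimal sketch of that arithmetic (the example WER values are illustrative, not from the paper):

```python
def relative_werr(baseline_wer: float, adapted_wer: float) -> float:
    """Relative word error rate reduction (WERR), in percent:
    how much of the baseline's WER the adapted system removes."""
    return 100.0 * (baseline_wer - adapted_wer) / baseline_wer

# e.g. a hypothetical baseline WER of 20.0% reduced to 16.4% by adaptation
# corresponds to an 18% relative WERR, matching the scale of gains reported.
print(round(relative_werr(20.0, 16.4), 1))  # 18.0
```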

Regression

Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems

no code implementations 14 Feb 2020 Natalia Tomashenko, Christian Raymond, Antoine Caubrière, Renato de Mori, Yannick Estève

The dialog history is represented as dialog history embedding vectors (so-called h-vectors) and provided as additional information to end-to-end SLU models in order to improve system performance.
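One common way to supply such an embedding as additional input is to broadcast it over time and concatenate it to each frame's features. This is an illustrative sketch under that assumption, not the authors' implementation (function and variable names are hypothetical):

```python
def add_history_embedding(features, h_vector):
    """Concatenate a dialog-history embedding (h-vector) to every frame.

    features: list of T frames, each a list of d_feat floats
    h_vector: list of d_hist floats
    returns:  list of T frames, each d_feat + d_hist floats
    """
    return [frame + h_vector for frame in features]

frames = [[0.1, 0.2], [0.3, 0.4]]   # 2 frames of 2-dim acoustic features
h_vec = [0.9, 0.8, 0.7]             # 3-dim dialog-history embedding
print(add_history_embedding(frames, h_vec))
# [[0.1, 0.2, 0.9, 0.8, 0.7], [0.3, 0.4, 0.9, 0.8, 0.7]]
```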

Slot Filling +1

ON-TRAC Consortium End-to-End Speech Translation Systems for the IWSLT 2019 Shared Task

no code implementations EMNLP (IWSLT) 2019 Ha Nguyen, Natalia Tomashenko, Marcely Zanon Boito, Antoine Caubrière, Fethi Bougares, Mickaël Rouvier, Laurent Besacier, Yannick Estève

This paper describes the ON-TRAC Consortium translation systems developed for the end-to-end model task of IWSLT Evaluation 2019 for the English-to-Portuguese language pair.

Translation

Recent Advances in End-to-End Spoken Language Understanding

no code implementations 29 Sep 2019 Natalia Tomashenko, Antoine Caubrière, Yannick Estève, Antoine Laurent, Emmanuel Morin

This work investigates spoken language understanding (SLU) systems in which the semantic information is extracted directly from the speech signal by a single end-to-end neural network model.

General Classification, Named Entity Recognition +5
