no code implementations • COLING 2016 • Guillaume Serrière, Christophe Cerisara, Dominique Fohr, Odile Mella
This work proposes a new confidence measure for evaluating the outputs of text-to-speech alignment systems, a key component for many applications such as semi-automatic corpus anonymization, lip syncing, film dubbing, corpus preparation for speech synthesis, and training of acoustic models for speech recognition.
no code implementations • 13 Jul 2017 • Hoa T. Le, Christophe Cerisara, Alexandre Denis
We study in this work the importance of depth in convolutional models for text classification, either when character or word inputs are considered.
1 code implementation • COLING 2018 • Christophe Cerisara, Somayeh Jafaritazehjani, Adedayo Oluokun, Hoa Le
Both the annotated corpus and deep network are released with an open-source license.
no code implementations • 11 Apr 2019 • Jiří Martínek, Pavel Král, Ladislav Lenc, Christophe Cerisara
Two multi-lingual models are proposed for this task.
no code implementations • 25 Sep 2019 • Christophe Cerisara
Most unsupervised neural networks training methods concern generative models, deep clustering, pretraining or some form of representation learning.
no code implementations • 11 Dec 2019 • Hoa T. Le, Christophe Cerisara, Claire Gardent
Work on summarization has explored both reinforcement learning (RL) optimization using ROUGE as a reward and syntax-aware models, such as models whose input is enriched with part-of-speech (POS) tags and dependency information.
no code implementations • 19 May 2020 • Jiří Martínek, Christophe Cerisara, Pavel Král, Ladislav Lenc
In this paper we exploit cross-lingual models to enable dialogue act recognition for specific tasks with a small number of annotations.
no code implementations • 22 Oct 2020 • Christophe Cerisara, Pavel Král, Ladislav Lenc
The performance of the proposed approach is consistent across these languages and it is comparable to the state-of-the-art results in English.
no code implementations • 11 Apr 2021 • Alaaeddine Chaoub, Alexandre Voisin, Christophe Cerisara, Benoît Iung
In this work, we propose an end-to-end deep learning model based on multi-layer perceptron (MLP) and long short-term memory (LSTM) layers to predict the RUL.
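The abstract names the architecture but not its details; as a rough illustration of the general idea, here is a minimal NumPy sketch of an LSTM forward pass feeding a linear regression head that outputs a scalar RUL estimate. All layer sizes, initializations, and the single linear head (standing in for the MLP layers) are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A single LSTM cell with one stacked weight matrix for the four gates."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Rows: input, forget, cell-candidate, output gates (in that order).
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # update cell state
        h = o * np.tanh(c)                  # new hidden state
        return h, c

def predict_rul(sensor_seq, cell, w_out, b_out):
    """Run a sensor sequence through the LSTM, regress RUL from the last state."""
    h = np.zeros(cell.n_hidden)
    c = np.zeros(cell.n_hidden)
    for x in sensor_seq:
        h, c = cell.step(x, h, c)
    return float(w_out @ h + b_out)         # scalar RUL estimate

# Toy usage: 30 time steps of 14 sensor channels (dimensions chosen arbitrarily).
rng = np.random.default_rng(1)
seq = rng.normal(size=(30, 14))
cell = LSTMCell(n_in=14, n_hidden=32)
rul = predict_rul(seq, cell, w_out=rng.normal(0.0, 0.1, 32), b_out=50.0)
```

In a trained model the weights would of course be learned (e.g. by gradient descent on a regression loss against ground-truth RUL labels); this sketch only shows the forward computation.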
1 code implementation • DCASE workshop 2021 • Félix Gontier, Romain Serizel, Christophe Cerisara
Automated audio captioning is the multimodal task of describing environmental audio recordings with fluent natural language.
Ranked #7 on Audio captioning on AudioCaps
1 code implementation • 30 Jan 2024 • Gaspard Michel, Elena V. Epure, Romain Hennequin, Christophe Cerisara
Recent approaches to automatically detect the speaker of an utterance of direct speech often disregard general information about characters in favor of local information found in the context, such as surrounding mentions of entities.
no code implementations • ACL 2022 • Guillaume Le Berre, Christophe Cerisara, Philippe Langlais, Guy Lapalme
Pre-trained models have shown very good performance on a number of question answering benchmarks, especially when fine-tuned on multiple question answering datasets at once.