Search Results for author: Olivier Galibert

Found 24 papers, 3 papers with code

Analyzing BERT Cross-lingual Transfer Capabilities in Continual Sequence Labeling

1 code implementation MMMPIE (COLING) 2022 Juan Manuel Coria, Mathilde Veron, Sahar Ghannay, Guillaume Bernard, Hervé Bredin, Olivier Galibert, Sophie Rosset

Knowledge transfer between neural language models is a widely used technique that has proven to improve performance in a multitude of natural language tasks, in particular with the recent rise of large pre-trained language models like BERT.

Continual Learning Cross-Lingual Transfer +6

A Textless Metric for Speech-to-Speech Comparison

1 code implementation 21 Oct 2022 Laurent Besacier, Swen Ribeiro, Olivier Galibert, Ioan Calapodescu

In this paper, we introduce a new and simple method for comparing speech utterances without relying on text transcripts.

Sentence Speech-to-Speech Translation +1

Evaluate On-the-job Learning Dialogue Systems and a Case Study for Natural Language Understanding

no code implementations 26 Feb 2021 Mathilde Veron, Sophie Rosset, Olivier Galibert, Guillaume Bernard

On-the-job learning consists of continuously learning while being used in production, in an open environment, meaning that the system has to deal on its own with situations and elements never seen before.

Natural Language Understanding

Évaluation de systèmes apprenant tout au long de la vie (Evaluation of lifelong learning systems)

no code implementations JEPTALNRECITAL 2020 Yevhenii Prokopalo, Sylvain Meignier, Olivier Galibert, Loïc Barrault, Anthony Larcher

Adaptation of their model by machine learning experts is possible but very costly, whereas the companies using these systems have domain experts who could support these systems in lifelong learning.

Evaluation of Lifelong Learning Systems

no code implementations LREC 2020 Yevhenii Prokopalo, Sylvain Meignier, Olivier Galibert, Loic Barrault, Anthony Larcher

Current intelligent systems need the expensive support of machine learning experts to sustain their performance level when used on a daily basis.

BIG-bench Machine Learning

Analyzing Learned Representations of a Deep ASR Performance Prediction Model

no code implementations WS 2018 Zied Elloumi, Laurent Besacier, Olivier Galibert, Benjamin Lecouteux

In a previous paper, we presented an ASR performance prediction system using CNNs that encode both text (ASR transcript) and speech, in order to predict word error rate.

Multi-Task Learning TAG

Estimation de la qualité d'un système de reconnaissance de la parole pour une tâche de compréhension (Quality estimation of a Speech Recognition System for a Spoken Language Understanding task)

no code implementations JEPTALNRECITAL 2016 Olivier Galibert, Nathalie Camelin, Paul Deléglise, Sophie Rosset

Here we compare different metrics, notably WER, NE-WER, and ATENE, a metric recently proposed for evaluating speech recognition systems given a named entity recognition task.

speech-recognition Speech Recognition +1

Comparaison de listes d'erreurs de transcription automatique de la parole : quelle complémentarité entre les différentes métriques ? (Comparing error lists for ASR systems: contribution of different metrics)

no code implementations JEPTALNRECITAL 2016 Olivier Galibert, Juliette Kahn, Sophie Rosset

The work we present here belongs to the field of evaluating automatic speech recognition systems with a view to their use in a downstream task, here named entity recognition.

LNE-Visu : a tool to explore and visualize multimedia data

no code implementations JEPTALNRECITAL 2016 Guillaume Bernard, Juliette Kahn, Olivier Galibert, Rémi Regnier, Séverine Demeyer

LNE-Visu is a tool to explore and visualize multimedia data created for the LNE evaluation campaigns.

Visu

Generating Task-Pertinent sorted Error Lists for Speech Recognition

no code implementations LREC 2016 Olivier Galibert, Mohamed Ameur Ben Jannet, Juliette Kahn, Sophie Rosset

In the context of Automatic Speech Recognition (ASR) used as a first step towards Named Entity Recognition (NER) in speech, error seriousness is usually determined by error frequency, owing to the use of WER as the metric for evaluating ASR output, despite the emergence of more relevant measures in the literature.
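WER, the metric discussed above, is the word-level edit distance between a reference transcript and an ASR hypothesis, normalized by the reference length. A minimal sketch of the standard Levenshtein-based computation (not the paper's own scoring code, just an illustration of the metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)
```

Because every error counts the same here regardless of its downstream impact, a substitution that destroys a named entity costs no more than one that leaves it intact, which is precisely the limitation the task-pertinent error lists address.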

Automatic Speech Recognition (ASR) +4

The ETAPE speech processing evaluation

no code implementations LREC 2014 Olivier Galibert, Jeremy Leixa, Gilles Adda, Khalid Choukri, Guillaume Gravier

The ETAPE evaluation is the third evaluation in automatic speech recognition and associated technologies in a series which started with ESTER.

Automatic Speech Recognition (ASR) +4

ETER : a new metric for the evaluation of hierarchical named entity recognition

no code implementations LREC 2014 Mohamed Ben Jannet, Martine Adda-Decker, Olivier Galibert, Juliette Kahn, Sophie Rosset

We then introduce a new metric, the Entity Tree Error Rate (ETER), to evaluate hierarchical and structured named entity detection, classification and decomposition.

Entity Extraction using GAN General Classification +3

The ETAPE corpus for the evaluation of speech-based TV content processing in the French language

no code implementations LREC 2012 Guillaume Gravier, Gilles Adda, Niklas Paulsson, Matthieu Carré, Aude Giraudel, Olivier Galibert

The paper presents a comprehensive overview of existing data for the evaluation of spoken content processing in a multimedia framework for the French language.

Speech Recognition

Analyzing the Impact of Prevalence on the Evaluation of a Manual Annotation Campaign

no code implementations LREC 2012 Karën Fort, Claire François, Olivier Galibert, Maha Ghribi

This article details work aiming at evaluating the quality of the manual annotation of gene renaming couples in scientific abstracts, which generates sparse annotations.

Extended Named Entities Annotation on OCRed Documents: From Corpus Constitution to Evaluation Campaign

no code implementations LREC 2012 Olivier Galibert, Sophie Rosset, Cyril Grouin, Pierre Zweigenbaum, Ludovic Quintard

Within the framework of the Quaero project, we proposed a new definition of named entities, based upon an extension of the coverage of named entities as well as the structure of those named entities.

Named Entity Recognition (NER) Optical Character Recognition (OCR)
