Search Results for author: Salima Mdhaffar

Found 13 papers, 2 papers with code

Federated Learning for ASR based on Wav2vec 2.0

2 code implementations20 Feb 2023 Tuan Nguyen, Salima Mdhaffar, Natalia Tomashenko, Jean-François Bonastre, Yannick Estève

This paper presents a study on the use of federated learning to train an ASR model based on a wav2vec 2.0 model pre-trained by self-supervision.

Federated Learning Language Modelling
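The entry above describes training an ASR model with federated learning, where clients train locally and a server aggregates their updates. A minimal sketch of the standard federated averaging (FedAvg) aggregation step is shown below; the function and variable names are illustrative, not taken from the paper, and real ASR training would operate on full wav2vec 2.0 parameter tensors rather than flat lists.

```python
# Minimal sketch of federated averaging (FedAvg) over client weight vectors.
# Each client's contribution is weighted by the size of its local dataset.
# Names are illustrative assumptions, not the paper's implementation.

def fedavg(client_weights, client_sizes):
    """Average clients' weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Example: two clients with different amounts of local speech data.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [100, 300]
print(fedavg(clients, sizes))  # [2.5, 3.5]
```

In practice this aggregation runs once per communication round, after each client has performed several local gradient steps on its private audio data.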

Retrieving Speaker Information from Personalized Acoustic Models for Speech Recognition

no code implementations7 Nov 2021 Salima Mdhaffar, Jean-François Bonastre, Marc Tommasi, Natalia Tomashenko, Yannick Estève

The widespread availability of powerful personal devices capable of collecting their users' voices has opened the opportunity to build speaker-adapted speech recognition (ASR) systems or to participate in collaborative learning of ASR.

Speaker Verification speech-recognition +1

Privacy attacks for automatic speech recognition acoustic models in a federated learning framework

no code implementations6 Nov 2021 Natalia Tomashenko, Salima Mdhaffar, Marc Tommasi, Yannick Estève, Jean-François Bonastre

This paper investigates methods to effectively retrieve speaker information from the personalized speaker adapted neural network acoustic models (AMs) in automatic speech recognition (ASR).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Where are we in semantic concept extraction for Spoken Language Understanding?

no code implementations24 Jun 2021 Sahar Ghannay, Antoine Caubrière, Salima Mdhaffar, Gaëlle Laperrière, Bassam Jabaian, Yannick Estève

More recent works on self-supervised training with unlabeled data open new perspectives in terms of performance for automatic speech recognition and natural language processing.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +7

A Multimodal Educational Corpus of Oral Courses: Annotation, Analysis and Case Study

no code implementations LREC 2020 Salima Mdhaffar, Yannick Estève, Antoine Laurent, Nicolas Hernandez, Richard Dufour, Delphine Charlet, Geraldine Damnati, Solen Quiniou, Nathalie Camelin

The use cases concern scientific fields from both speech and text processing, with language model adaptation, thematic segmentation, and transcription-to-slide alignment.

Language Modelling

Apport de l'adaptation automatique des modèles de langage pour la reconnaissance de la parole : évaluation qualitative extrinsèque dans un contexte de traitement de cours magistraux (Contribution of automatic adaptation of language models for speech recognition: extrinsic qualitative evaluation in a context of educational courses)

no code implementations JEPTALNRECITAL 2019 Salima Mdhaffar, Yannick Estève, Nicolas Hernandez, Antoine Laurent, Solen Quiniou

The automatic transcriptions produced by these systems are increasingly usable and are employed in complex natural language processing systems, for example for machine translation, indexing, and document retrieval. Recent studies have proposed metrics for comparing the quality of automatic transcriptions from different systems according to the target task. In this study, we aim to measure, qualitatively, the contribution of automatically adapting language models to the domain targeted by a lecture.

speech-recognition Speech Recognition
