Search Results for author: Salima Mdhaffar

Found 16 papers, 3 papers with code

Performance Analysis of Speech Encoders for Low-Resource SLU and ASR in Tunisian Dialect

1 code implementation • 5 Jul 2024 • Salima Mdhaffar, Haroun Elleuch, Fethi Bougares, Yannick Estève

In contrast to existing research, this paper contributes by comparing the effectiveness of SSL approaches in the context of (i) the low-resource spoken Tunisian Arabic dialect and (ii) its combination with a low-resource SLU and ASR scenario, where only a few semantic annotations are available for fine-tuning.

Automatic Speech Recognition (ASR) +3

Sonos Voice Control Bias Assessment Dataset: A Methodology for Demographic Bias Assessment in Voice Assistants

no code implementations • 14 May 2024 • Chloé Sekkat, Fanny Leroy, Salima Mdhaffar, Blake Perry Smith, Yannick Estève, Joseph Dureau, Alice Coucke

Recent works demonstrate that voice assistants do not perform equally well for everyone, but research on demographic robustness of speech technologies is still scarce.

Automatic Speech Recognition Diversity +3

Federated Learning for ASR based on Wav2vec 2.0

2 code implementations • 20 Feb 2023 • Tuan Nguyen, Salima Mdhaffar, Natalia Tomashenko, Jean-François Bonastre, Yannick Estève

This paper presents a study on the use of federated learning to train an ASR model based on a wav2vec 2.0 model pre-trained by self-supervision.
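The core idea of federated learning as described above can be illustrated with a minimal federated-averaging (FedAvg) sketch. This is a hypothetical toy example, not the paper's implementation: `local_update`, the placeholder update rule, and the toy data are all assumptions for illustration only.

```python
# Minimal FedAvg sketch (hypothetical; not the paper's code).
# Each client fine-tunes a local copy of the shared model weights,
# then the server averages the clients' weights into a new global model.
import numpy as np

def local_update(weights, client_data, lr=0.01):
    """Placeholder local fine-tuning: nudge weights toward the client's data mean."""
    return weights + lr * (client_data.mean(axis=0) - weights)

def fedavg_round(global_weights, clients):
    """One federated round: local updates on every client, then server-side averaging."""
    updates = [local_update(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)

# Toy usage: 3 clients, each holding private data, and a 4-parameter "model".
rng = np.random.default_rng(0)
w = np.zeros(4)
clients = [rng.normal(size=(10, 4)) for _ in range(3)]
for _ in range(5):
    w = fedavg_round(w, clients)
```

In a real ASR setting the weights would be the wav2vec 2.0 fine-tuning parameters and the local update would be several epochs of gradient descent on each client's speech data; only the weights, never the raw audio, are shared with the server.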

Federated Learning Language Modeling +1

Retrieving Speaker Information from Personalized Acoustic Models for Speech Recognition

no code implementations • 7 Nov 2021 • Salima Mdhaffar, Jean-François Bonastre, Marc Tommasi, Natalia Tomashenko, Yannick Estève

The widespread availability of powerful personal devices capable of collecting their users' voices has opened the opportunity to build speaker-adapted automatic speech recognition (ASR) systems or to participate in collaborative learning of ASR.

Speaker Verification speech-recognition +1

Privacy attacks for automatic speech recognition acoustic models in a federated learning framework

no code implementations • 6 Nov 2021 • Natalia Tomashenko, Salima Mdhaffar, Marc Tommasi, Yannick Estève, Jean-François Bonastre

This paper investigates methods to effectively retrieve speaker information from the personalized speaker adapted neural network acoustic models (AMs) in automatic speech recognition (ASR).

Automatic Speech Recognition (ASR) +2

Where are we in semantic concept extraction for Spoken Language Understanding?

no code implementations • 24 Jun 2021 • Sahar Ghannay, Antoine Caubrière, Salima Mdhaffar, Gaëlle Laperrière, Bassam Jabaian, Yannick Estève

More recent works on self-supervised training with unlabeled data open new perspectives in terms of performance for automatic speech recognition and natural language processing.

Automatic Speech Recognition (ASR) +7

Apport de l'adaptation automatique des modèles de langage pour la reconnaissance de la parole : évaluation qualitative extrinsèque dans un contexte de traitement de cours magistraux (Contribution of automatic adaptation of language models for speech recognition: extrinsic qualitative evaluation in a context of educational courses)

no code implementations • JEP/TALN/RECITAL 2019 • Salima Mdhaffar, Yannick Estève, Nicolas Hernandez, Antoine Laurent, Solen Quiniou

(Translated from French.) The automatic transcriptions produced by these systems are increasingly usable and are exploited in complex natural language processing systems, for example for machine translation, indexing, and document retrieval. Recent studies have proposed metrics for comparing the quality of automatic transcriptions from different systems according to the target task. In this study, we aim to measure, qualitatively, the contribution of automatically adapting language models to the domain covered by a lecture.

Speech Recognition
