Search Results for author: Adriana Fernandez-Lopez

Found 4 papers, 1 paper with code

SparseVSR: Lightweight and Noise Robust Visual Speech Recognition

no code implementations • 10 Jul 2023 • Adriana Fernandez-Lopez, Honglie Chen, Pingchuan Ma, Alexandros Haliassos, Stavros Petridis, Maja Pantic

We evaluate our 50% sparse model on 7 different visual noise types and achieve an overall absolute improvement of more than 2% WER compared to the dense equivalent.

Speech Recognition • Visual Speech Recognition
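
The abstract does not state how the 50% sparsity is obtained; the sketch below shows one common way to reach that sparsity level, unstructured magnitude pruning of a linear layer's weights. It is a hypothetical illustration only (the layer size, threshold rule, and pruning granularity are assumptions, not the paper's method).

```python
import torch
import torch.nn as nn

def magnitude_prune_(layer: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in-place until roughly `sparsity`
    fraction of the entries are zero (illustrative, not the paper's method)."""
    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())                      # number of weights to remove
        threshold = w.abs().flatten().kthvalue(k).values   # k-th smallest magnitude
        mask = (w.abs() > threshold).float()               # keep only larger weights
        w.mul_(mask)

layer = nn.Linear(512, 512)
magnitude_prune_(layer, sparsity=0.5)
print((layer.weight == 0).float().mean())  # ~0.5 of the weights are now zero
```

Unstructured pruning like this keeps layer shapes unchanged, so the dense architecture and its checkpoints remain directly usable.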

Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels

1 code implementation • 25 Mar 2023 • Pingchuan Ma, Alexandros Haliassos, Adriana Fernandez-Lopez, Honglie Chen, Stavros Petridis, Maja Pantic

Recently, the performance of automatic, visual, and audio-visual speech recognition (ASR, VSR, and AV-ASR, respectively) has been substantially improved, mainly due to the use of larger models and training sets.

Audio-Visual Speech Recognition • Automatic Speech Recognition • +4

Automatic Viseme Vocabulary Construction to Enhance Continuous Lip-reading

no code implementations • 26 Apr 2017 • Adriana Fernandez-Lopez, Federico M. Sukno

Our results indicate that we are able to recognize approximately 58% of the visemes, 47% of the phonemes and 23% of the words in a continuous speech scenario, and that the optimal viseme vocabulary for Spanish comprises 20 visemes.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +3

Towards Estimating the Upper Bound of Visual-Speech Recognition: The Visual Lip-Reading Feasibility Database

no code implementations • 26 Apr 2017 • Adriana Fernandez-Lopez, Oriol Martinez, Federico M. Sukno

On the one hand, researchers have reported that the mapping between phonemes and visemes (visual units) is one-to-many, because some phonemes are visually similar and indistinguishable from one another.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +3
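
The phoneme-viseme ambiguity described above can be made concrete with a toy mapping table: phonemes that differ acoustically but share the same lip shape collapse onto one viseme, so distinct words can look identical on the lips. The grouping below (bilabials, labiodentals, alveolars) is a simplified, hypothetical vocabulary, not the one constructed in either paper.

```python
# Toy illustration of information loss in the phoneme-to-viseme mapping:
# visually similar phonemes share one viseme class, so distinct words can
# become indistinguishable visually ("homophenes").
PHONEME_TO_VISEME = {
    "p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
    "f": "V_labiodental", "v": "V_labiodental",
    "t": "V_alveolar", "d": "V_alveolar", "n": "V_alveolar",
    "a": "V_open_vowel",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to its viseme sequence."""
    return [PHONEME_TO_VISEME[p] for p in phonemes]

# "pat", "bat" and "mat" share the same viseme sequence, so a purely visual
# recognizer cannot tell them apart.
print(to_visemes(["p", "a", "t"]) == to_visemes(["b", "a", "t"]) == to_visemes(["m", "a", "t"]))  # True
```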
