Sub-word Level Lip Reading With Visual Attention

The goal of this paper is to learn strong lip reading models that can recognise speech in silent videos. Most prior works deal with the open-set visual speech recognition problem by adapting existing automatic speech recognition techniques on top of trivially pooled visual features. Instead, in this paper we focus on the unique challenges encountered in lip reading and propose tailored solutions. To this end, we make the following contributions: (1) we propose an attention-based pooling mechanism to aggregate visual speech representations; (2) we use sub-word units for lip reading for the first time and show that this allows us to better model the ambiguities of the task; (3) we propose a model for Visual Speech Detection (VSD), trained on top of the lip reading network. With these contributions, we obtain state-of-the-art results on the challenging LRS2 and LRS3 benchmarks when training on public datasets, and even surpass models trained on large-scale industrial datasets while using an order of magnitude less data. Our best model achieves a 22.6% word error rate on the LRS2 dataset, a performance unprecedented for lip reading models, significantly reducing the performance gap between lip reading and automatic speech recognition. Moreover, on the AVA-ActiveSpeaker benchmark, our VSD model surpasses all visual-only baselines and even outperforms several recent audio-visual methods.
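
The attention-based pooling of contribution (1) can be pictured as a learned query attending over each frame's spatial feature grid, rather than collapsing the grid by average or max pooling. The paper's exact VTP module is not reproduced here; the following is a minimal PyTorch sketch of the general idea, in which all dimensions, names, and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Sketch of attention-based spatial pooling: a learned query attends
    over the spatial positions of a per-frame feature map. Illustrative
    only; not the paper's actual VTP architecture."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learned pooling query
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, H*W, dim) -- flattened spatial grid of one frame
        q = self.query.expand(feats.size(0), -1, -1)
        pooled, _ = self.attn(q, feats, feats)  # attention decides where to look
        return pooled.squeeze(1)                # (batch, dim) frame embedding

# Usage: pool a 7x7 grid of 512-d features for a batch of two frames.
pool = AttentionPooling(dim=512)
print(pool(torch.randn(2, 49, 512)).shape)  # torch.Size([2, 512])
```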

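One plausible reading of contribution (2): visually confusable words often share sub-word fragments, so a sub-word vocabulary lets the decoder reuse evidence across them instead of committing to whole words. The paper's tokenizer settings are not given here; the sketch below trains a toy BPE model with the HuggingFace `tokenizers` library, where the corpus and vocabulary size are stand-in assumptions:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Toy corpus standing in for LRS2/LRS3 transcripts (illustrative only).
corpus = [
    "the weather was nice today",
    "whether it rains or not",
    "we gather together later",
]

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=60, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

# Words decompose into shared sub-word pieces, so the recogniser can
# back off to fragments it has seen before rather than whole words.
print(tokenizer.encode("weather").tokens)  # e.g. ['we', 'ather'], merge-dependent
```
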
Published at CVPR 2022.

Results from the Paper


 Ranked #1 on Visual Speech Recognition on LRS2 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Audio-Visual Active Speaker Detection | AVA-ActiveSpeaker | VTP (visual only) | Validation mean average precision | 89.2% | #12 | No |
| Visual Speech Recognition | LRS2 | VTP | Word Error Rate (WER) | 28.9% | #2 | No |
| Lipreading | LRS2 | VTP | Word Error Rate (WER) | 28.9% | #5 | No |
| Lipreading | LRS2 | VTP (more data) | Word Error Rate (WER) | 22.6% | #3 | Yes |
| Visual Speech Recognition | LRS2 | VTP (more data) | Word Error Rate (WER) | 22.6% | #1 | Yes |
| Visual Speech Recognition | LRS3-TED | VTP | Word Error Rate (WER) | 40.6% | #3 | No |
| Lipreading | LRS3-TED | VTP | Word Error Rate (WER) | 40.6% | #9 | No |
| Lipreading | LRS3-TED | VTP (more data) | Word Error Rate (WER) | 30.7% | #6 | Yes |
| Visual Speech Recognition | LRS3-TED | VTP (more data) | Word Error Rate (WER) | 30.7% | #2 | Yes |
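
For reference, the WER reported above is the word-level Levenshtein distance between hypothesis and reference transcripts, normalised by the reference length (lower is better). A minimal self-contained implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of six reference words -> WER of about 0.167.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```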

Methods