Deep Audio-Visual Speech Recognition

The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
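To make the two training objectives concrete, the sketch below shows, in PyTorch, how a CTC head and a sequence-to-sequence decoder can each sit on top of a transformer encoder. This is a minimal illustration under assumed hyperparameters (512-dimensional per-frame features, a 40-character vocabulary, 6 layers), not the authors' implementation; the visual front-end that would produce the lip-region features is omitted and replaced by random tensors.

```python
# Minimal sketch (not the authors' code): a CTC head and a seq2seq decoder,
# both built on transformer layers over pre-extracted lip-region features.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 40          # assumed character vocabulary size (CTC blank at index 0)
D_MODEL, HEADS = 512, 8

class TMCTC(nn.Module):
    """Transformer encoder + linear head, trained with the CTC loss."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D_MODEL, HEADS, 2048, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, feats):                    # feats: (B, T, D_MODEL)
        return self.head(self.encoder(feats))    # (B, T, VOCAB) frame-wise logits

class TMSeq2seq(nn.Module):
    """Transformer encoder-decoder, trained with cross-entropy on characters."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(D_MODEL, HEADS, 6, 6, batch_first=True)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, feats, tgt_in):            # tgt_in: (B, S) character ids
        tgt = self.embed(tgt_in)
        # Causal mask so each output position only attends to earlier characters.
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(feats, tgt, tgt_mask=mask)
        return self.head(out)                    # (B, S, VOCAB)

# Toy batch: 2 clips of 75 feature frames, target transcripts of 20 characters.
feats = torch.randn(2, 75, D_MODEL)
targets = torch.randint(1, VOCAB, (2, 20))

# CTC expects (T, B, VOCAB) log-probs plus input and target lengths.
logp = F.log_softmax(TMCTC()(feats), dim=-1).transpose(0, 1)
ctc = F.ctc_loss(logp, targets,
                 torch.full((2,), 75, dtype=torch.long),
                 torch.full((2,), 20, dtype=torch.long), blank=0)

# Seq2seq: predict each character given the previous ones (teacher forcing).
logits = TMSeq2seq()(feats, targets[:, :-1])
ce = F.cross_entropy(logits.reshape(-1, VOCAB), targets[:, 1:].reshape(-1))
print(float(ctc), float(ce))
```

The practical difference is that the CTC head emits an independent distribution per input frame and relies on an external language model for linguistic context, whereas the seq2seq decoder conditions on previously emitted characters, which is one reason the two models behave differently in the benchmark results below.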

Benchmark results:

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Lipreading | LRS2 | TM-CTC + extLM | Word Error Rate (WER) | 54.7% | #15 |
| Lipreading | LRS2 | TM-seq2seq + extLM | Word Error Rate (WER) | 48.3% | #10 |
| Audio-Visual Speech Recognition | LRS2 | TM-CTC | Test WER | 8.2% | #6 |
| Audio-Visual Speech Recognition | LRS2 | TM-seq2seq | Test WER | 8.5% | #7 |
| Automatic Speech Recognition (ASR) | LRS2 | TM-CTC | Test WER | 10.1% | #7 |
| Automatic Speech Recognition (ASR) | LRS2 | TM-seq2seq | Test WER | 9.7% | #6 |
| Audio-Visual Speech Recognition | LRS3-TED | TM-seq2seq | Word Error Rate (WER) | 7.2% | #7 |
| Lipreading | LRS3-TED | TM-seq2seq | Word Error Rate (WER) | 58.9% | #12 |
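All values above are word error rates. As a reminder of how WER is scored, the short example below computes the standard word-level edit distance (substitutions, insertions, and deletions divided by the number of reference words); it is illustrative only and not the paper's evaluation code.

```python
# Illustrative WER computation via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution and one deletion over six reference words -> WER ~= 33.3%.
print(wer("the cat sat on the mat", "the cat sits on mat"))
```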
