ASR is all you need: cross-modal distillation for lip reading

28 Nov 2019 · Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman

The goal of this work is to train strong models for visual speech recognition without requiring human-annotated ground truth data. We achieve this by distilling from an Automatic Speech Recognition (ASR) model that has been trained on a large-scale audio-only corpus. We use a cross-modal distillation method that combines Connectionist Temporal Classification (CTC) with a frame-wise cross-entropy loss. Our contributions are fourfold: (i) we show that ground truth transcriptions are not necessary to train a lip reading system; (ii) we show how arbitrary amounts of unlabelled video data can be leveraged to improve performance; (iii) we demonstrate that distillation significantly speeds up training; and (iv) we obtain state-of-the-art results on the challenging LRS2 and LRS3 datasets when training only on publicly available data.
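
The core of the approach is a combined training objective: a CTC loss computed against transcriptions produced by the pre-trained ASR teacher, plus a frame-wise cross-entropy term that matches the teacher's per-frame posteriors. The PyTorch sketch below shows one way such a loss could be assembled; the tensor shapes, the weighting factor alpha, and all function and argument names are illustrative assumptions rather than the authors' implementation.

    # Sketch of a cross-modal distillation loss (assumed, not the authors' code):
    # CTC on ASR-generated pseudo-transcripts + frame-wise distillation against
    # the ASR teacher's posteriors (KL divergence, which matches cross-entropy
    # up to a constant that does not depend on the student).
    # Assumes the teacher's posteriors have already been resampled to the
    # video frame rate so that student and teacher frames align.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits,     # (T, B, V) lip-reading model outputs
                          teacher_log_probs,  # (T, B, V) ASR posteriors, log space
                          pseudo_targets,     # (B, S) token ids decoded by the ASR model
                          input_lengths,      # (B,) valid frames per clip
                          target_lengths,     # (B,) valid tokens per pseudo-transcript
                          alpha=0.5,          # assumed trade-off between the two terms
                          blank=0):
        log_probs = F.log_softmax(student_logits, dim=-1)

        # CTC term: align the student's frame-wise outputs with the
        # transcription produced by the audio-only ASR teacher.
        ctc = F.ctc_loss(log_probs, pseudo_targets, input_lengths, target_lengths,
                         blank=blank, zero_infinity=True)

        # Frame-wise term: match the teacher's per-frame posterior distribution.
        kd = F.kl_div(log_probs, teacher_log_probs, log_target=True,
                      reduction="batchmean")

        return alpha * ctc + (1.0 - alpha) * kd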


Results from the Paper


Ranked #13 on Lipreading on LRS3-TED (using extra training data)

Task       | Dataset | Model        | Metric Name           | Metric Value | Global Rank | Uses Extra Training Data
Lipreading | LRS2    | CTC + KD ASR | Word Error Rate (WER) | 53.2%        | #14         |

Results from Other Papers


Task       | Dataset  | Model    | Metric Name           | Metric Value | Rank | Uses Extra Training Data
Lipreading | LRS3-TED | CTC + KD | Word Error Rate (WER) | 59.8         | #13  | Yes

Methods


No methods listed for this paper.