ASR is all you need: cross-modal distillation for lip reading

28 Nov 2019 · Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman

The goal of this work is to train strong models for visual speech recognition without requiring human annotated ground truth data. We achieve this by distilling from an Automatic Speech Recognition (ASR) model that has been trained on a large-scale audio-only corpus...
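The distillation idea described above can be sketched as a frame-level knowledge-distillation loss: the ASR teacher produces per-frame posteriors over output units, and the lip-reading student is trained to match them. Below is a minimal, self-contained sketch of such a KD term, assuming teacher and student both emit per-frame logits over the same vocabulary; the function name, temperature parameter, and shapes are illustrative assumptions, not the paper's actual code (the paper's full objective also includes a CTC term, omitted here).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean per-frame KL(teacher || student) between temperature-scaled
    posteriors. Shapes: (num_frames, vocab_size). Illustrative sketch only."""
    p = softmax(teacher_logits / T)          # teacher (ASR) posteriors
    log_q = np.log(softmax(student_logits / T))  # student (lip) log-posteriors
    # KL(p || q) per frame = sum_v p*log(p) - p*log(q); average over frames.
    kl = (p * np.log(p)).sum(axis=-1) - (p * log_q).sum(axis=-1)
    return float(kl.mean())

# Hypothetical frame-level logits for a 4-frame clip over a 10-unit vocabulary.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 10))
student = rng.normal(size=(4, 10))
loss = distillation_loss(student, teacher)  # non-negative KL; 0 iff posteriors match
```

In practice this KD term would be combined with a CTC loss on the student, matching the "CTC + KD ASR" model named in the results table below.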


Results from the Paper


Ranked #5 on Lipreading on LRS2 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Lipreading | LRS2 | CTC + KD ASR | Word Error Rate (WER) | 53.2% | #5 | Yes |
