Robust Speech Recognition via Large-Scale Weak Supervision

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results, but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
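Since the models and inference code are publicly released, zero-shot use is straightforward. Below is a minimal sketch using the open-source `openai-whisper` package; the `"base"` checkpoint and the file name `audio.mp3` are placeholder choices for illustration, not prescriptions from the paper.

```python
import whisper

# Load one of the released checkpoints ("base" is a placeholder choice).
model = whisper.load_model("base")

# Zero-shot transcription: no fine-tuning on the target data.
result = model.transcribe("audio.mp3")
print(result["text"])

# The same model also performs spoken language identification.
audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
```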


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Speech Recognition | AMI-IHM | Whisper | Word Error Rate (WER) | 16.4 | # 1 |
| Speech Recognition | AMI SDM1 | Whisper | Word Error Rate (WER) | 36.9 | # 2 |
| Speech Recognition | Artie Bias Corpus | Whisper | Word Error Rate (WER) | 6.7 | # 1 |
| Speech Recognition | CALLHOME | Whisper | Word Error Rate (WER) | 15.8 | # 1 |
| Speech Recognition | CHiME6 | Whisper | Word Error Rate (WER) | 25.6 | # 1 |
| Speech Recognition | Common Voice | Whisper | Word Error Rate (WER) | 9.5 | # 1 |
| Speech Recognition | CORAAL | Whisper | Word Error Rate (WER) | 19.4 | # 1 |
| Spoken Language Identification | Fleurs | Whisper | Accuracy (%) | 64.1 | # 2 |
| Speech Recognition | Fleurs (English) | Whisper | Word Error Rate (WER) | 4.6 | # 1 |
| Speech Recognition | LibriSpeech test-clean | Whisper | Word Error Rate (WER) | 2.7 | # 30 |
| Speech Recognition | LibriSpeech test-other | Whisper | Word Error Rate (WER) | 5.6 | # 25 |
| Speech Recognition | Switchboard corpus | Whisper | Word Error Rate (WER) | 13.1 | # 1 |
| Speech Recognition | TED-LIUM | Whisper | Word Error Rate (WER) | 4.0 | # 1 |
| Speech Recognition | VoxPopuli | Whisper | Word Error Rate (WER) | 7.3 | # 1 |
| Speech Recognition | WSJ | Whisper | Word Error Rate (WER) | 3.1 | # 1 |
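All speech recognition rows report word error rate (WER): the number of word-level substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of reference words. The following is an illustrative sketch of the metric only; the paper applies a text normalizer to both transcripts before scoring, which this sketch omits.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein edit distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words: WER ≈ 0.167.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```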
