End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures

We study pseudo-labeling for the semi-supervised training of ResNet, Time-Depth Separable ConvNets, and Transformers for speech recognition, with either CTC or Seq2Seq loss functions. We perform experiments on the standard LibriSpeech dataset, and leverage additional unlabeled data from LibriVox through pseudo-labeling. We show that while Transformer-based acoustic models have superior performance with the supervised dataset alone, semi-supervision improves all models across architectures and loss functions and bridges much of the performance gap between them. In doing so, we reach a new state-of-the-art for end-to-end acoustic models decoded with an external language model in the standard supervised learning setting, and a new absolute state-of-the-art with semi-supervised training. Finally, we study the effect of leveraging different amounts of unlabeled audio, propose several ways of evaluating the characteristics of unlabeled audio which improve acoustic modeling, and show that acoustic models trained with more audio rely less on external language models.
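
As a concrete illustration of the recipe above, here is a minimal, self-contained PyTorch sketch of one pseudo-labeling round with a CTC acoustic model: a supervised CTC update, followed by greedy decoding of unlabeled audio into pseudo-labels. Everything here (`TinyAM`, the shapes, the hyperparameters, the synthetic batches) is invented for the example; the paper's actual models are far larger, and its pseudo-labels come from a beam-search decoder fused with an external language model rather than greedy decoding.

```python
import torch
import torch.nn as nn

VOCAB = 29  # hypothetical: blank + 26 letters + space + apostrophe
FEAT = 80   # hypothetical log-mel filterbank dimension


class TinyAM(nn.Module):
    """Deliberately tiny conv + recurrent acoustic model emitting per-frame
    CTC log-probabilities (a stand-in for the paper's ResNet / TDS /
    Transformer encoders)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(FEAT, 128, kernel_size=3, padding=1)
        self.rnn = nn.GRU(128, 128, batch_first=True)
        self.out = nn.Linear(128, VOCAB)

    def forward(self, x):  # x: (batch, time, FEAT)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2).relu()
        h, _ = self.rnn(h)
        return self.out(h).log_softmax(dim=-1)  # (batch, time, VOCAB)


def ctc_step(model, optim, feats, targets, input_lens, target_lens):
    """One supervised CTC update (blank index 0)."""
    loss_fn = nn.CTCLoss(blank=0, zero_infinity=True)
    log_probs = model(feats).transpose(0, 1)  # CTCLoss wants (time, batch, VOCAB)
    loss = loss_fn(log_probs, targets, input_lens, target_lens)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()


@torch.no_grad()
def greedy_pseudo_labels(model, feats):
    """Greedy CTC decode of unlabeled audio into pseudo-label token ids.
    (The paper decodes with a beam search and an external LM instead;
    greedy decoding keeps the sketch short.)"""
    ids = model(feats).argmax(dim=-1)        # (batch, time)
    labels = []
    for seq in ids:
        seq = torch.unique_consecutive(seq)  # collapse CTC repeats
        labels.append(seq[seq != 0])         # drop blanks
    return labels


# One round: supervised step on labeled data, then pseudo-label unlabeled audio.
model = TinyAM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
feats = torch.randn(4, 100, FEAT)                       # fake labeled batch
targets = torch.randint(1, VOCAB, (4, 20))              # fake transcripts
input_lens = torch.full((4,), 100, dtype=torch.long)
target_lens = torch.full((4,), 20, dtype=torch.long)
ctc_step(model, optim, feats, targets, input_lens, target_lens)
pseudo = greedy_pseudo_labels(model, torch.randn(4, 100, FEAT))
# The next round would retrain on labeled + (audio, pseudo) pairs combined.
```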

Datasets

LibriSpeech (labeled) and LibriVox (unlabeled, used for pseudo-labeling)

Results from the Paper


Ranked #16 on Speech Recognition on LibriSpeech test-other (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| Speech Recognition | LibriSpeech test-clean | Conv + Transformer AM + Pseudo-Labeling (ConvLM with Transformer Rescoring) | Word Error Rate (WER) | 2.03 | #20 | Yes |
| Speech Recognition | LibriSpeech test-clean | Conv + Transformer AM (ConvLM with Transformer Rescoring) (LS only) | Word Error Rate (WER) | 2.31 | #29 | No |
| Speech Recognition | LibriSpeech test-other | Conv + Transformer AM (ConvLM with Transformer Rescoring) | Word Error Rate (WER) | 4.11 | #16 | Yes |
| Speech Recognition | LibriSpeech test-other | Conv + Transformer AM (ConvLM with Transformer Rescoring) (LS only) | Word Error Rate (WER) | 5.18 | #26 | No |
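
All rows above report word error rate: the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, normalized by reference length. For reference, the sketch below is a minimal Python illustration of the metric, not the paper's or the leaderboard's scoring code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / max(len(ref), 1)

# A WER of 2.03 means roughly 2 errors per 100 reference words, e.g.:
print(word_error_rate("the cat sat", "the cat sat down"))  # 0.333... (1 insertion / 3 words)
```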

Methods

ResNet, Time-Depth Separable (TDS) convolutions, Transformer (acoustic models and LM rescoring), ConvLM, CTC, Seq2Seq, pseudo-labeling