Transformer-based Acoustic Modeling for Hybrid Speech Recognition

We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition. Several modeling choices are discussed in this work, including various positional embedding methods and an iterated loss to enable training of deep transformers. We also present a preliminary study of using limited right context in transformer models, which makes streaming applications possible. We demonstrate that on the widely used LibriSpeech benchmark, our transformer-based AM outperforms the best published hybrid result by 19% to 26% relative when the standard n-gram language model (LM) is used. Combined with a neural-network LM for rescoring, our proposed approach achieves state-of-the-art results on LibriSpeech. Our findings are also confirmed on a much larger internal dataset.
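
The abstract names two concrete techniques: an iterated loss applied at intermediate layers to help train deep transformer stacks, and a limited right context so that each frame attends only a bounded number of frames into the future, which is what enables streaming. The sketch below illustrates both ideas in PyTorch under stated assumptions; it is not the authors' implementation, and the module name, layer sizes, number of output targets, auxiliary-loss placement, and the 0.3 auxiliary weight are all illustrative choices.

```python
# Illustrative sketch (not the paper's code) of a limited-right-context attention
# mask and an iterated (intermediate-layer) cross-entropy loss for a hybrid AM.
import torch
import torch.nn as nn


def limited_right_context_mask(seq_len: int, right_context: int) -> torch.Tensor:
    # mask[i, j] is True where attention from frame i to frame j is BLOCKED,
    # i.e. j lies more than `right_context` frames in the future of i.
    idx = torch.arange(seq_len)
    return idx[None, :] > idx[:, None] + right_context


class TransformerAM(nn.Module):
    # Layer sizes, 12-layer depth, 9000 senone targets, and aux_every=4 are assumptions.
    def __init__(self, dim=512, heads=8, num_layers=12, n_senones=9000, aux_every=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
             for _ in range(num_layers)]
        )
        self.aux_every = aux_every
        self.proj = nn.Linear(dim, n_senones)      # final senone classifier
        self.aux_proj = nn.Linear(dim, n_senones)  # shared head for auxiliary losses
        self.ce = nn.CrossEntropyLoss()

    def forward(self, feats, targets, right_context=8):
        # feats: (batch, frames, dim) acoustic features; targets: (batch, frames) labels.
        mask = limited_right_context_mask(feats.size(1), right_context).to(feats.device)
        x, loss = feats, feats.new_zeros(())
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x, src_mask=mask)
            if i % self.aux_every == 0 and i < len(self.layers):
                # Iterated loss: extra cross-entropy on intermediate representations.
                loss = loss + 0.3 * self.ce(self.aux_proj(x).transpose(1, 2), targets)
        loss = loss + self.ce(self.proj(x).transpose(1, 2), targets)
        return loss
```

In a hypothetical training loop, `loss = model(feats, targets, right_context=8)` would be backpropagated as usual; setting `right_context=0` yields a fully causal encoder suitable for streaming, while a large value approaches the offline, full-context model.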


Datasets

LibriSpeech

Results from the Paper


Ranked #23 on Speech Recognition on LibriSpeech test-other (using extra training data)

Task               | Dataset                | Model                             | Metric                | Value | Global Rank
Speech Recognition | LibriSpeech test-clean | Hybrid + Transformer LM rescoring | Word Error Rate (WER) | 2.26  | #26
Speech Recognition | LibriSpeech test-other | Hybrid + Transformer LM rescoring | Word Error Rate (WER) | 4.85  | #23
