wav2vec: Unsupervised Pre-training for Speech Recognition

11 Apr 2019 · Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli

We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data, and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce the WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data are available. Our approach achieves 2.43% WER on the nov92 test set, outperforming Deep Speech 2, the best reported character-based system in the literature, while using two orders of magnitude less labeled training data.

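The objective described in the abstract can be pictured as follows: a convolutional encoder turns raw audio into latent frames, a context network summarizes past frames, and a binary classifier is trained to tell a true future latent from randomly sampled distractors. The PyTorch sketch below is a minimal, hypothetical illustration of that idea; the module names, layer sizes, negative-sampling scheme, and hyperparameters are assumptions for demonstration, not the paper's actual configuration.

```python
# Hypothetical sketch of a wav2vec-style pre-training setup (not the
# authors' code): conv encoder over raw audio, causal context network,
# and a noise contrastive binary classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Multi-layer 1-D conv net mapping raw waveform samples to latents."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=4, stride=2), nn.ReLU(),
        )
    def forward(self, wav):                   # wav: (batch, samples)
        return self.net(wav.unsqueeze(1))     # -> (batch, dim, frames)

class Context(nn.Module):
    """Causal conv net summarizing past latents into a context vector."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=2)
    def forward(self, z):
        # trim the right side so each output only sees current/past frames
        return self.conv(z)[..., :z.size(-1)]

def contrastive_loss(c, z, k=1, n_negatives=10):
    """Binary task: true latent k steps ahead vs. randomly drawn distractors."""
    b, d, t = z.shape
    steps = t - k
    c_t = c[..., :steps]                      # context at time t
    z_pos = z[..., k:]                        # true latent at time t + k
    pos_logits = (c_t * z_pos).sum(dim=1)     # (batch, steps)
    loss = F.binary_cross_entropy_with_logits(
        pos_logits, torch.ones_like(pos_logits))
    for _ in range(n_negatives):
        # distractors drawn uniformly from the same batch of latents
        idx = torch.randint(0, t, (b, steps), device=z.device)
        z_neg = torch.gather(z, 2, idx.unsqueeze(1).expand(b, d, steps))
        neg_logits = (c_t * z_neg).sum(dim=1)
        loss = loss + F.binary_cross_entropy_with_logits(
            neg_logits, torch.zeros_like(neg_logits)) / n_negatives
    return loss

# Toy usage on random "audio": 4 clips of 1 s at 16 kHz.
enc, ctx = Encoder(), Context()
wav = torch.randn(4, 16000)
z = enc(wav)
c = ctx(z)
print(contrastive_loss(c, z).item())
```

After pre-training with such an objective, the context representations would stand in for log-mel filterbank features as input to a downstream acoustic model, which is how the abstract describes the representations being used.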

Results from the Paper


Ranked #5 on Speech Recognition on TIMIT (using extra training data)

Task: Speech Recognition
Dataset: TIMIT
Model: wav2vec
Metric Name: Percentage error
Metric Value: 14.7
Global Rank: #5
Uses Extra Training Data: Yes

Methods


No methods listed for this paper.