Multi-Variant Consistency based Self-supervised Learning for Robust Automatic Speech Recognition

23 Dec 2021  ·  Changfeng Gao, Gaofeng Cheng, Pengyuan Zhang ·

Automatic speech recognition (ASR) has advanced rapidly in recent years but still degrades significantly in far-field and noisy environments. Recent developments in self-supervised learning (SSL) can improve ASR performance by pre-training the model on additional unlabeled speech, and SSL pre-trained models have achieved state-of-the-art results on several speech benchmarks. Nevertheless, most previous SSL methods ignore the influence of background noise and reverberation, which is crucial when deploying ASR systems in real-world speech applications. This study addresses robust ASR by introducing a multi-variant consistency (MVC) based SSL method that adapts to different environments. MVC-SSL is a robust SSL pre-training method designed for noisy and distant-talking speech in real-world applications. Unlike previous SSL methods, MVC-SSL computes the contrastive loss across audio from different acoustic conditions or channels, and can therefore learn representations invariant to changes in the environment or the recording equipment. We also explore different SSL training pipelines to balance the noisy distant-talking speech against additional high-resource clean speech. We evaluate the proposed method on the commercially motivated dataset CHiME-4 and the meeting dataset AMI. With the help of MVC-SSL and an appropriate training pipeline, we achieve up to 30% relative word error rate reductions over the baseline wav2vec2.0, one of the most successful SSL methods for ASR.
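To make the cross-variant contrastive idea concrete, below is a minimal PyTorch sketch (not the authors' code) of an InfoNCE-style loss computed between two acoustic variants of the same utterances, e.g. a close-talk recording and a noise- or reverberation-augmented far-field view. The encoder, the frame-level positive pairing, and the temperature value are illustrative assumptions, not details confirmed by the abstract.

```python
# Hypothetical sketch of a multi-variant contrastive (InfoNCE-style) loss.
# A shared wav2vec2-style encoder is assumed to produce frame representations
# for two variants of the same batch of utterances; frames at the same
# (batch, time) index are treated as positives, all other frames as negatives.
import torch
import torch.nn.functional as F

def multi_variant_contrastive_loss(z_a, z_b, temperature=0.1):
    """z_a, z_b: (batch, time, dim) encoder outputs for the same utterances
    under two different acoustic conditions or channels."""
    b, t, d = z_a.shape
    a = F.normalize(z_a.reshape(b * t, d), dim=-1)
    p = F.normalize(z_b.reshape(b * t, d), dim=-1)
    logits = a @ p.T / temperature          # (b*t, b*t) similarity matrix
    targets = torch.arange(b * t, device=logits.device)
    # Symmetric loss: each variant's frames must retrieve their counterparts.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))

# Illustrative usage with a shared encoder (names are placeholders):
# z_clean = encoder(clean_wavs)        # (batch, time, dim)
# z_noisy = encoder(augmented_wavs)    # same utterances, different condition
# loss = multi_variant_contrastive_loss(z_clean, z_noisy)
```

Under this reading, the invariance claimed in the abstract falls out of the objective: the only way to lower the loss is for the encoder to map the same utterance to similar representations regardless of the acoustic condition it was recorded in.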
