E-Branchformer: Branchformer with Enhanced merging for speech recognition

30 Sep 2022 · Kwangyoun Kim, Felix Wu, Yifan Peng, Jing Pan, Prashant Sridhar, Kyu J. Han, Shinji Watanabe

Conformer, which combines convolution and self-attention sequentially to capture both local and global information, has shown remarkable performance and is currently regarded as the state of the art for automatic speech recognition (ASR). Several other studies have explored integrating convolution and self-attention, but none has matched Conformer's performance. The recently introduced Branchformer achieves comparable performance to Conformer by using dedicated parallel branches for convolution and self-attention and merging the local and global context from each branch. In this paper, we propose E-Branchformer, which enhances Branchformer by applying an effective merging method and stacking additional point-wise modules. E-Branchformer sets new state-of-the-art word error rates (WERs) of 1.81% and 3.65% on the LibriSpeech test-clean and test-other sets without using any external training data.
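The enhanced merge described in the abstract can be pictured as: concatenate the outputs of the global (self-attention) and local (convolutional) branches, mix neighboring frames with a depth-wise convolution, and project back to the model dimension. Below is a minimal PyTorch sketch of such a merge module; the class name `EnhancedMerge`, the default kernel size, and the residual connection around the convolution are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class EnhancedMerge(nn.Module):
    """Sketch of an E-Branchformer-style merge of two branch outputs.

    Concatenates the self-attention (global) and convolutional (local)
    branch outputs, mixes neighboring frames with a depth-wise 1-D
    convolution, and projects back to the model dimension.
    """

    def __init__(self, d_model: int, kernel_size: int = 31):
        super().__init__()
        # Depth-wise conv over time on the concatenated (2 * d_model) features.
        self.dw_conv = nn.Conv1d(
            2 * d_model, 2 * d_model,
            kernel_size, padding=kernel_size // 2, groups=2 * d_model,
        )
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x_global: torch.Tensor, x_local: torch.Tensor) -> torch.Tensor:
        # x_global, x_local: (batch, time, d_model)
        x = torch.cat([x_global, x_local], dim=-1)          # (B, T, 2d)
        # Residual around the depth-wise conv; Conv1d expects (B, C, T).
        x = x + self.dw_conv(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)                                 # (B, T, d)

# Usage: merge two branch outputs of shape (batch, time, d_model).
merge = EnhancedMerge(d_model=256)
y = merge(torch.randn(4, 100, 256), torch.randn(4, 100, 256))  # (4, 100, 256)
```

Keeping the convolution depth-wise keeps the merge cheap (one multiply per channel per tap) while still letting temporal context influence how the two branches are combined, in contrast to a purely point-wise (per-frame) merge.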


Datasets

LibriSpeech

Task | Dataset | Model | Metric | Value | Global Rank
-----|---------|-------|--------|-------|------------
Speech Recognition | LibriSpeech test-clean | E-Branchformer (L) + Internal Language Model Estimation | Word Error Rate (WER) | 1.81 | #11
Speech Recognition | LibriSpeech test-other | E-Branchformer (L) + Internal Language Model Estimation | Word Error Rate (WER) | 3.65 | #9
