End-to-End Offline Speech Translation System for IWSLT 2020 using Modality Agnostic Meta-Learning

In this paper, we describe the system submitted to the IWSLT 2020 Offline Speech Translation Task. We adopt the Transformer architecture coupled with a meta-learning approach to build our end-to-end Speech-to-Text Translation (ST) system. Our meta-learning approach tackles the data scarcity of the ST task by leveraging the data available from the Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. The meta-learning approach combined with synthetic data augmentation techniques improves model performance significantly and achieves BLEU scores of 24.58, 27.51, and 27.61 on the IWSLT 2015 test, MuST-C test, and Europarl-ST test sets, respectively.
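To make the meta-learning idea concrete, the following is a minimal, hedged sketch of how a shared Transformer could be meta-trained on ASR and MT source tasks and adapted toward ST, assuming a PyTorch setup and a first-order MAML-style update. The model class, batch sampler, task names, and hyperparameters below are illustrative placeholders, not the authors' exact architecture or training configuration.

```python
# Illustrative first-order meta-learning loop over ASR/MT source tasks
# with ST as the target task. All components are simplified stand-ins.
import copy
import random

import torch
import torch.nn as nn


class SharedSeq2Seq(nn.Module):
    """Toy shared encoder-decoder: speech features and text tokens are both
    projected into a common d_model space so one parameter set serves
    ASR, MT, and ST episodes (the "modality agnostic" part)."""

    def __init__(self, d_model=256, vocab_size=1000, speech_dim=80):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, d_model)     # audio modality
        self.text_embed = nn.Embedding(vocab_size, d_model)   # text modality
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt, src_is_speech):
        src = self.speech_proj(src) if src_is_speech else self.text_embed(src)
        tgt = self.text_embed(tgt)
        return self.out(self.transformer(src, tgt))


def episode_loss(model, batch, criterion):
    src, tgt_in, tgt_out, is_speech = batch
    logits = model(src, tgt_in, is_speech)
    return criterion(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))


def sample_batch(task, vocab_size=1000):
    """Placeholder sampler; real batches would come from ASR/MT/ST corpora."""
    if task in ("asr", "st"):   # speech input: (batch, frames, mel bins)
        src, is_speech = torch.randn(4, 50, 80), True
    else:                       # text input: token ids
        src, is_speech = torch.randint(0, vocab_size, (4, 30)), False
    tgt = torch.randint(0, vocab_size, (4, 20))
    return src, tgt[:, :-1], tgt[:, 1:], is_speech


model = SharedSeq2Seq()
criterion = nn.CrossEntropyLoss()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
inner_lr = 1e-3

for step in range(3):  # tiny meta-training loop for illustration
    task = random.choice(["asr", "mt"])  # sample a source task
    # Inner loop: adapt a clone of the shared parameters on the source task.
    fast_model = copy.deepcopy(model)
    fast_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
    episode_loss(fast_model, sample_batch(task), criterion).backward()
    fast_opt.step()
    # Outer loop: evaluate the adapted parameters on the ST target task and
    # push the gradient back into the shared initialization (first-order).
    meta_opt.zero_grad()
    st_loss = episode_loss(fast_model, sample_batch("st"), criterion)
    st_loss.backward()
    for p, fp in zip(model.parameters(), fast_model.parameters()):
        p.grad = fp.grad.clone() if fp.grad is not None else None
    meta_opt.step()
    print(f"step {step}: ST meta-loss {st_loss.item():.3f}")
```

In practice, the synthetic data augmentation mentioned in the abstract would simply enlarge the ST episode pool; the meta-update itself is unchanged.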


Datasets


Results from the Paper


Ranked #3 on Speech-to-Text Translation on MuST-C EN->DE (using extra training data)

Task: Speech-to-Text Translation
Dataset: MuST-C EN->DE
Model: Transformer + Meta-Learning (ASR/MT) + Data Augmentation
Metric: Case-sensitive sacreBLEU
Metric Value: 27.51
Global Rank: #3
Uses Extra Training Data: Yes

Methods