Better Sign Language Translation with STMC-Transformer

COLING 2020 · Kayo Yin, Jesse Read

Sign Language Translation (SLT) first uses a Sign Language Recognition (SLR) system to extract sign language glosses from videos; a translation system then generates spoken language text from these glosses. This paper focuses on the translation system and introduces the STMC-Transformer, which improves on the previous state of the art by over 5 and 7 BLEU on gloss-to-text and video-to-text translation, respectively, on the PHOENIX-Weather 2014T dataset. On the ASLG-PC12 corpus, we report an increase of over 16 BLEU. We also expose a problem with current methods that rely on gloss supervision: video-to-text translation with our STMC-Transformer outperforms translation of ground-truth (GT) glosses. This contradicts previous claims that GT gloss translation acts as an upper bound on SLT performance, and it reveals that glosses are an inefficient representation of sign language. For future SLT research, we therefore suggest end-to-end training of the recognition and translation models, or the use of a different sign language annotation scheme.
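To make the two-stage pipeline concrete, below is a minimal sketch of the gloss-to-text stage as an encoder-decoder Transformer in PyTorch. The vocabulary sizes, layer counts, and toy batch are invented for illustration, and positional encodings are omitted for brevity; this is not the authors' STMC-Transformer implementation (in the paper, a Spatial-Temporal Multi-Cue network handles the recognition stage that produces the glosses).

```python
# Sketch of the gloss-to-text translation stage: gloss token IDs in,
# spoken-language token logits out. Hyperparameters and data are toy values.
import torch
import torch.nn as nn

GLOSS_VOCAB, TEXT_VOCAB, D_MODEL = 1000, 3000, 512

class GlossToText(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(GLOSS_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(TEXT_VOCAB, D_MODEL)
        # Small encoder-decoder Transformer; batch_first gives [B, T, D] tensors.
        # Positional encodings are omitted here to keep the sketch short.
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(D_MODEL, TEXT_VOCAB)

    def forward(self, gloss_ids, text_ids):
        # Causal mask so each target position attends only to earlier positions.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(text_ids.size(1))
        h = self.transformer(self.src_emb(gloss_ids), self.tgt_emb(text_ids),
                             tgt_mask=tgt_mask)
        return self.out(h)  # [batch, tgt_len, TEXT_VOCAB] logits

model = GlossToText()
gloss_ids = torch.randint(0, GLOSS_VOCAB, (4, 12))  # toy gloss sequences
text_ids = torch.randint(0, TEXT_VOCAB, (4, 15))    # toy target sentences
print(model(gloss_ids, text_ids).shape)  # torch.Size([4, 15, 3000])
```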


Results from the Paper


Ranked #1 on Sign Language Translation on ASLG-PC12 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Sign Language Translation | ASLG-PC12 | Transformer Ens. | BLEU-4 | 82.87 | #1 | Yes |
| Sign Language Translation | RWTH-PHOENIX-Weather 2014T | STMC+Transformer (Ens) | BLEU-4 | 25.40 | #4 | |
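For reference, the BLEU-4 scores above are corpus-level BLEU with n-grams up to length 4. A minimal sketch of computing such a score with the sacrebleu library follows; sacrebleu and the toy sentences are assumptions for illustration, not the paper's exact evaluation script.

```python
# Corpus-level BLEU-4 on toy data with sacrebleu (assumed library choice).
import sacrebleu

hypotheses = ["the weather will be sunny tomorrow"]   # model outputs
references = [["tomorrow the weather will be sunny"]]  # one reference stream
score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU-4: {score.score:.2f}")
```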
