Towards Privacy-Aware Sign Language Translation at Scale

14 Feb 2024 · Phillip Rust, Bowen Shi, Skyler Wang, Necati Cihan Camgöz, Jean Maillard

A major impediment to the advancement of sign language translation (SLT) is data scarcity. Much of the sign language data currently available on the web cannot be used for training supervised models due to the lack of aligned captions. Furthermore, scaling SLT using large-scale web-scraped datasets bears privacy risks due to the presence of biometric information, which the responsible development of SLT technologies should account for. In this work, we propose a two-stage framework for privacy-aware SLT at scale that addresses both of these issues. We introduce SSVP-SLT, which leverages self-supervised video pretraining on anonymized and unannotated videos, followed by supervised SLT finetuning on a curated parallel dataset. SSVP-SLT achieves state-of-the-art finetuned and zero-shot gloss-free SLT performance on the How2Sign dataset, outperforming the strongest respective baselines by over 3 BLEU-4. Based on controlled experiments, we further discuss the advantages and limitations of self-supervised pretraining and anonymization via facial obfuscation for SLT.
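
The abstract does not spell out how the anonymization is implemented, only that it relies on facial obfuscation. As a rough, hypothetical sketch of that idea (not the paper's actual pipeline), the snippet below blurs detected face regions in a single video frame with OpenCV; the input file name is a placeholder.

```python
# Hypothetical illustration of anonymization via facial obfuscation (not the
# paper's actual implementation): detect faces in a frame with OpenCV's bundled
# Haar cascade and Gaussian-blur each detected region.
import cv2


def obfuscate_faces(frame, kernel_size=(51, 51)):
    """Return a copy of `frame` with every detected face region blurred."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    anonymized = frame.copy()
    for (x, y, w, h) in faces:
        roi = anonymized[y:y + h, x:x + w]
        anonymized[y:y + h, x:x + w] = cv2.GaussianBlur(roi, kernel_size, 0)
    return anonymized


if __name__ == "__main__":
    cap = cv2.VideoCapture("signing_clip.mp4")  # placeholder input video
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame_anonymized.jpg", obfuscate_faces(frame))
    cap.release()
```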

Datasets


Introduced in the Paper:

DailyMoth-70h

Used in the Paper:

How2Sign
YouTube-ASL

Results from the Paper


 Ranked #1 on Gloss-free Sign Language Translation on How2Sign (using extra training data)

Task                                  Dataset        Model     Metric   Value  Global Rank
Gloss-free Sign Language Translation  DailyMoth-70h  SSVP-SLT  BLEU     28.8   #1
Gloss-free Sign Language Translation  How2Sign       SSVP-SLT  BLEU-4   15.5   #1
Gloss-free Sign Language Translation  How2Sign       SSVP-SLT  BLEURT   49.6   #1
Gloss-free Sign Language Translation  How2Sign       SSVP-SLT  ROUGE-L  38.4   #1
Gloss-free Sign Language Translation  How2Sign       SSVP-SLT  BLEU-1   43.2   #1
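
The BLEU-4, BLEU-1, BLEURT, and ROUGE-L figures above are corpus-level translation metrics. As a minimal, hypothetical example (not the paper's evaluation script), BLEU-4 can be computed with the sacrebleu library:

```python
# Hypothetical example of computing corpus-level BLEU-4 with sacrebleu.
# The hypotheses and references below are made up; the paper's own
# evaluation setup may use different tokenization or signature settings.
import sacrebleu

hypotheses = ["the weather is nice today", "she signs very quickly"]
# One reference stream: references[0][i] is the reference for hypotheses[i].
references = [["the weather is nice today", "she signs very fast"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # BLEU-4 by default
print(f"BLEU-4: {bleu.score:.1f}")
```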

Methods


No methods listed for this paper.