Stochastic Fine-grained Labeling of Multi-state Sign Glosses for Continuous Sign Language Recognition

ECCV 2020 · Zhe Niu, Brian Mak

In this paper, we propose novel stochastic modeling of several components of a continuous sign language recognition (CSLR) system based on a transformer encoder and connectionist temporal classification (CTC). Most importantly, we model each sign gloss with multiple states, where the number of states is a categorical random variable that follows a learned probability distribution, providing stochastic fine-grained labels for training the CTC decoder. We further propose a stochastic frame dropping mechanism and a gradient stopping method to mitigate the severe overfitting that arises when training the transformer model with the CTC loss. These two methods also significantly reduce the training computation in both time and space. We evaluate our model on popular CSLR datasets and show its effectiveness compared with state-of-the-art methods.
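A minimal sketch of the two ideas named in the abstract, written in PyTorch; this is not the authors' code. The function names, the `max_states` cap, the drop probability, and the way state logits are supplied are illustrative assumptions, not details taken from the paper.

```python
import torch

def stochastic_frame_dropping(frames: torch.Tensor, drop_prob: float = 0.5) -> torch.Tensor:
    """Randomly keep a subset of frames during training.

    frames: (T, C, H, W) video tensor for one sample.
    Each frame is kept independently with probability (1 - drop_prob),
    which shortens the input sequence and acts as a regularizer.
    """
    T = frames.size(0)
    keep = torch.rand(T) >= drop_prob
    if not keep.any():                      # always keep at least one frame
        keep[torch.randint(T, (1,))] = True
    return frames[keep]

def expand_glosses_to_states(gloss_ids: torch.Tensor,
                             state_logits: torch.Tensor,
                             max_states: int = 3) -> torch.Tensor:
    """Sample a number of states per gloss and expand the label sequence,
    so CTC is trained against fine-grained (state-level) labels.

    gloss_ids:    (L,) gloss indices of one target sentence.
    state_logits: (L, max_states) unnormalized log-probabilities of using
                  1..max_states states for each gloss (a learned,
                  gloss-dependent categorical distribution in the paper;
                  here simply a tensor handed in by the caller).
    Returns a 1-D tensor of state labels: gloss g with k sampled states
    is mapped to labels g*max_states, ..., g*max_states + k - 1.
    """
    num_states = torch.distributions.Categorical(logits=state_logits).sample() + 1  # (L,)
    expanded = []
    for g, k in zip(gloss_ids.tolist(), num_states.tolist()):
        expanded.extend(g * max_states + s for s in range(k))
    return torch.tensor(expanded, dtype=torch.long)
```

The expanded state-label sequence would then serve as the target for a standard CTC loss over a state-level (rather than gloss-level) output vocabulary.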

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Sign Language Recognition | RWTH-PHOENIX-Weather 2014 | Stochastic CSLR | Word Error Rate (WER) | 25.3 | #12 |
| Sign Language Recognition | RWTH-PHOENIX-Weather 2014 T | Stochastic CSLR | Word Error Rate (WER) | 26.1 | #9 |
