Sign Pose-Based Transformer for Word-Level Sign Language Recognition

WACV 2022 · Matyáš Boháček, Marek Hrúz

In this paper we present a system for word-level sign language recognition based on the Transformer model. We aim for a solution with low computational cost, since we see great potential in the use of such a recognition system on handheld devices. We base the recognition on the estimation of the pose of the human body in the form of 2D landmark locations. We introduce a robust pose normalization scheme which takes the signing space into consideration and processes the hand poses in a separate local coordinate system, independent of the body pose. We show experimentally the significant impact of this normalization on the accuracy of our proposed system. We introduce several augmentations of the body pose that further improve the accuracy, including a novel sequential joint rotation augmentation. With all the systems in place, we achieve state-of-the-art top-1 results on the WLASL and LSA64 datasets. For WLASL, we are able to successfully recognize 63.18% of sign recordings in the 100-gloss subset, which is a relative improvement of 5% over the prior state of the art. For the 300-gloss subset, we achieve a recognition rate of 43.78%, which is a relative improvement of 3.8%. On the LSA64 dataset, we report a test recognition accuracy of 100%.
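As a rough illustration of the normalization idea described above, here is a minimal sketch, not the authors' code: it maps body landmarks into a signing-space coordinate frame scaled by the shoulder distance and expresses each hand in its own per-frame local coordinate system. The array shapes, landmark indices, and shoulder-based scaling are assumptions made for this example only, not the paper's exact recipe.

```python
import numpy as np


def normalize_body(body: np.ndarray, l_shoulder: int = 5, r_shoulder: int = 2) -> np.ndarray:
    """Map body landmarks into a signing-space-like coordinate frame.

    body: (T, J, 2) array of 2D landmarks over T frames.
    The frame is centered between the shoulders and scaled by the
    shoulder distance (an assumed stand-in for the signing space).
    """
    shoulders = body[:, [l_shoulder, r_shoulder], :]                     # (T, 2, 2)
    center = shoulders.mean(axis=1, keepdims=True)                       # (T, 1, 2)
    width = np.linalg.norm(shoulders[:, 0] - shoulders[:, 1], axis=-1)   # (T,)
    scale = np.maximum(width, 1e-6)[:, None, None]                       # avoid division by zero
    return (body - center) / scale


def normalize_hand(hand: np.ndarray) -> np.ndarray:
    """Express hand landmarks in a local coordinate system, independent of the body pose.

    hand: (T, 21, 2) array. Each frame is shifted and scaled by its own
    bounding box, so the hand shape is preserved regardless of where the
    hand appears in the image.
    """
    mins = hand.min(axis=1, keepdims=True)                               # (T, 1, 2)
    maxs = hand.max(axis=1, keepdims=True)                               # (T, 1, 2)
    size = np.maximum((maxs - mins).max(axis=-1, keepdims=True), 1e-6)   # (T, 1, 1)
    return (hand - mins) / size                                          # hand fits a unit square per frame


if __name__ == "__main__":
    # Synthetic data just to show the expected shapes.
    rng = np.random.default_rng(0)
    body = rng.uniform(0.0, 1.0, size=(30, 54, 2))
    hand = rng.uniform(0.0, 1.0, size=(30, 21, 2))
    print(normalize_body(body).shape, normalize_hand(hand).shape)
```

Normalizing each hand against its own bounding box keeps the hand shape consistent regardless of where the hand sits within the signing space, which mirrors the abstract's point about a body-independent local coordinate system.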


Results from the Paper


Task                        Dataset    Model    Metric Name      Metric Value   Global Rank
Sign Language Recognition   LSA64      SPOTER   Accuracy (%)     100            #1
Sign Language Recognition   WLASL100   SPOTER   Top-1 Accuracy   63.18          #4
