Sign Language Recognition
68 papers with code • 11 benchmarks • 19 datasets
Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal is to develop algorithms that can understand and interpret sign language, enabling people who use it as their primary mode of communication to interact more easily with non-signers.
(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)
Most implemented papers
Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective
Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture.
SignCol: Open-Source Software for Collecting Sign Language Gestures
Sign(ed) languages use gestures, such as hand or head movements, for communication.
Lightweight and Unobtrusive Data Obfuscation at IoT Edge for Remote Inference
Executing deep neural network inference on server-class or cloud backends, using data generated at the edge of the Internet of Things, is desirable primarily because of the limited compute power of edge devices and the need to protect the confidentiality of the inference networks.
Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation
We report state-of-the-art sign language recognition and translation results achieved by our Sign Language Transformers.
Better Sign Language Translation with STMC-Transformer
This contradicts previous claims that ground-truth (GT) gloss translation acts as an upper bound for sign language translation (SLT) performance, and reveals that glosses are an inefficient representation of sign language.
BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues
Recent progress in fine-grained gesture and action classification, and in machine translation, points to the possibility of automated sign language recognition becoming a reality.
A Comprehensive Study on Deep Learning-based Methods for Sign Language Recognition
In this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted.
Quantitative Survey of the State of the Art in Sign Language Recognition
This work presents a meta study covering around 300 published sign language recognition papers with over 400 experimental results.
Position and Rotation Invariant Sign Language Recognition from 3D Kinect Data with Recurrent Neural Networks
Sign language is a gesture-based symbolic communication medium among people with speech and hearing impairments.
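The paper's title points to a common preprocessing idea for skeleton-based recognition: making 3D joint coordinates invariant to where the signer stands and how they face the camera. The paper's exact normalization is not given here, so the following is only a minimal sketch under common assumptions (joint indices for the root and shoulders are invented for illustration): translate the skeleton so a root joint sits at the origin, then rotate about the vertical axis so the shoulder line is parallel to the x-axis.

```python
import numpy as np

def normalize_skeleton(joints, root=0, left_sh=1, right_sh=2):
    """Make (N, 3) joint coordinates invariant to position and yaw rotation:
    translate the root joint to the origin, then rotate about the vertical
    (y) axis so the shoulder line lies parallel to the x-axis."""
    j = joints - joints[root]                 # position invariance
    dx, _, dz = j[right_sh] - j[left_sh]      # shoulder direction in the xz-plane
    theta = np.arctan2(dz, dx)                # yaw angle of the shoulder line
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])              # rotation about the y-axis
    return j @ R.T

# Toy 3-joint skeleton: root, left shoulder, right shoulder
skeleton = np.array([[1.0, 2.0, 3.0],
                     [2.0, 2.0, 3.0],
                     [1.0, 2.0, 5.0]])
normalized = normalize_skeleton(skeleton)
```

Because the transform is rigid (translation plus rotation), pairwise joint distances are preserved, so the normalized coordinates carry the same pose information regardless of the signer's location or facing direction.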
Self-Mutual Distillation Learning for Continuous Sign Language Recognition
Currently, a typical network for CSLR combines a visual module, which focuses on spatial and short-term temporal information, with a contextual module, which focuses on long-term temporal information; the Connectionist Temporal Classification (CTC) loss is adopted to train the network.
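A network trained with the CTC loss described above is typically paired with best-path (greedy) decoding at inference time: take the most likely label at each frame, merge consecutive repeats, and drop the blank symbol. The sketch below illustrates only that standard decoding step, with an invented toy gloss vocabulary; it is not the paper's implementation.

```python
import numpy as np

def ctc_greedy_decode(frame_probs, blank=0):
    """Collapse per-frame gloss probabilities (T, V) into a gloss sequence:
    argmax at each frame, merge consecutive repeats, then remove blanks
    (standard CTC best-path decoding)."""
    best = frame_probs.argmax(axis=1)          # (T,) frame-level labels
    collapsed = []
    for label in best:
        if not collapsed or label != collapsed[-1]:
            collapsed.append(int(label))       # merge repeated frames
    return [g for g in collapsed if g != blank]

# Toy example: 6 frames, vocabulary {0: blank, 1: "HELLO", 2: "WORLD"}
T, V = 6, 3
probs = np.zeros((T, V))
probs[[0, 3], 0] = 1.0    # blank frames separate the two glosses
probs[[1, 2], 1] = 1.0    # "HELLO" held for two frames
probs[[4, 5], 2] = 1.0    # "WORLD" held for two frames
print(ctc_greedy_decode(probs))  # -> [1, 2]
```

The repeat-merging step is what lets CTC handle the many-frames-per-gloss alignment without frame-level labels, which is exactly why it suits continuous sign language video.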