Sign Language Recognition

68 papers with code • 11 benchmarks • 19 datasets

Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal is to develop algorithms that can understand and interpret sign language, enabling people who use it as their primary mode of communication to interact more easily with non-signers.

(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)

Most implemented papers

Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective

gulvarol/bsldict 22 Aug 2019

Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture.

SignCol: Open-Source Software for Collecting Sign Language Gestures

mohaEs/SignCol 31 Oct 2019

Sign(ed) languages use gestures, such as hand or head movements, for communication.

Lightweight and Unobtrusive Data Obfuscation at IoT Edge for Remote Inference

ntu-aiot/ObfNet 20 Dec 2019

Executing deep neural network inference on server-class or cloud backends, using data generated at the edge of the Internet of Things, is desirable primarily because of the limited compute power of edge devices and the need to protect the confidentiality of the inference neural networks.

Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation

neccam/slt CVPR 2020

We report state-of-the-art sign language recognition and translation results achieved by our Sign Language Transformers.

Better Sign Language Translation with STMC-Transformer

kayoyin/transformer-slt COLING 2020

This contradicts previous claims that GT gloss translation acts as an upper bound for SLT performance and reveals that glosses are an inefficient representation of sign language.

BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues

gulvarol/bsl1k ECCV 2020

Recent progress in fine-grained gesture and action classification, and in machine translation, points to the possibility of automated sign language recognition becoming a reality.

A Comprehensive Study on Deep Learning-based Methods for Sign Language Recognition

iliasprc/slrzoo 24 Jul 2020

In this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted.

Quantitative Survey of the State of the Art in Sign Language Recognition

oskoller/sign-language-state-of-the-art 22 Aug 2020

This work presents a meta study covering around 300 published sign language recognition papers with over 400 experimental results.

Position and Rotation Invariant Sign Language Recognition from 3D Kinect Data with Recurrent Neural Networks

prasunroy/sign-language 23 Oct 2020

Sign language is a gesture-based symbolic communication medium among people with speech and hearing impairments.

Self-Mutual Distillation Learning for Continuous Sign Language Recognition

ycmin95/VAC_CSLR ICCV 2021

Currently, a typical network combination for CSLR includes a visual module, which focuses on spatial and short-term temporal information, followed by a contextual module, which focuses on long-term temporal information; the Connectionist Temporal Classification (CTC) loss is adopted to train the network.
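The pipeline described above can be sketched in PyTorch. This is a minimal illustration, not the paper's actual model: the layer sizes, module choices (a 1D convolution as the visual module, a BiLSTM as the contextual module), and all names are illustrative assumptions; only the overall visual-then-contextual structure trained with CTC follows the text.

```python
import torch
import torch.nn as nn

class CSLRSketch(nn.Module):
    """Illustrative CSLR network: visual module -> contextual module -> CTC."""

    def __init__(self, in_dim=512, hidden=256, vocab_size=100):
        super().__init__()
        # Visual module: a temporal 1D convolution over per-frame features,
        # capturing spatial/short-term temporal patterns (sizes are assumptions).
        self.visual = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Contextual module: a BiLSTM modeling long-term temporal context.
        self.context = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # Frame-wise classifier over the gloss vocabulary (+1 for CTC blank).
        self.fc = nn.Linear(2 * hidden, vocab_size + 1)

    def forward(self, x):
        # x: (batch, time, in_dim) frame features
        h = self.visual(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.context(h)
        return self.fc(h).log_softmax(-1)  # (batch, time, vocab+1)

# One training step: CTC aligns frame-level predictions to the shorter
# gloss sequence without needing frame-level labels.
model = CSLRSketch()
frames = torch.randn(2, 40, 512)              # 2 clips, 40 frames each
glosses = torch.randint(1, 101, (2, 8))       # 8 glosses per clip (0 = blank)
log_probs = model(frames).transpose(0, 1)     # CTCLoss expects (time, batch, C)
input_lens = torch.tensor([40, 40])
target_lens = torch.tensor([8, 8])
loss = nn.CTCLoss(blank=0)(log_probs, glosses, input_lens, target_lens)
loss.backward()
```

The key design point the excerpt highlights is that CTC handles the length mismatch between the frame sequence and the gloss sequence, so no per-frame annotation is required.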