Sign Language Recognition

67 papers with code • 10 benchmarks • 19 datasets

Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal is to develop algorithms that can understand and interpret signing, enabling people whose primary mode of communication is sign language to interact more easily with non-signers.
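At its simplest, isolated sign recognition maps a short video clip to a single gloss label. The sketch below is a deliberately minimal, hypothetical pipeline (not from any of the papers listed here): a stub "backbone" pools each frame into a feature vector, the clip is temporally averaged, and a linear head picks a gloss. The gloss names and random weights are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_frame_features(frames):
    # Stand-in for a CNN backbone: average-pool each frame's
    # pixels down to a 64-dimensional feature vector.
    return frames.reshape(frames.shape[0], -1, 64).mean(axis=1)

def classify_sign(frames, weights, glosses):
    feats = extract_frame_features(frames)   # (T, 64) per-frame features
    clip_feat = feats.mean(axis=0)           # temporal pooling -> (64,)
    logits = clip_feat @ weights             # (num_glosses,) scores
    return glosses[int(np.argmax(logits))]

glosses = ["HELLO", "THANK-YOU", "BOOK"]      # toy gloss vocabulary
frames = rng.standard_normal((16, 64, 64))    # 16 frames of a 64x64 "video"
weights = rng.standard_normal((64, len(glosses)))
pred = classify_sign(frames, weights, glosses)
print(pred)
```

Real systems replace the pooling stub with a trained 2D/3D CNN or skeleton encoder, and continuous recognition replaces the single argmax with a sequence model.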

(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)

Most implemented papers

Visual Alignment Constraint for Continuous Sign Language Recognition

ycmin95/VAC_CSLR ICCV 2021

Specifically, the proposed VAC comprises two auxiliary losses: one focuses on visual features only, and the other enforces prediction alignment between the feature extractor and the alignment module.
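One of the two auxiliary terms aligns the frame-wise predictions of the visual feature extractor with those of the downstream alignment module, in the spirit of knowledge distillation. The snippet below is a rough sketch of such a KL-divergence alignment term only (the CTC losses on the main and visual branches are omitted); all names and shapes are assumptions, not the paper's actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def alignment_loss(visual_logits, context_logits):
    # KL(context || visual), averaged over frames: push the visual
    # classifier's frame-wise predictions toward those of the
    # full (contextual) alignment module.
    p = softmax(context_logits)   # "teacher": alignment module
    q = softmax(visual_logits)    # "student": visual features only
    return float(np.sum(p * (np.log(p) - np.log(q))) / p.shape[0])

rng = np.random.default_rng(1)
T, V = 8, 5                       # frames, vocabulary size (toy numbers)
visual = rng.standard_normal((T, V))
context = rng.standard_normal((T, V))
loss = alignment_loss(visual, context)
print(loss)
```

In training, a term like this would be summed with the CTC losses, weighted by a hyperparameter.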

Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble

jackyjsy/sam-slr-v2 12 Oct 2021

Current Sign Language Recognition (SLR) methods usually extract features via deep neural networks and suffer overfitting due to limited and noisy data.

Self-Emphasizing Network for Continuous Sign Language Recognition

hulianyuyy/sen_cslr 30 Nov 2022

To relieve this problem, we propose a self-emphasizing network (SEN) to emphasize informative spatial regions in a self-motivated way, with few extra computations and without additional expensive supervision.

Improving Sign Recognition with Phonology

leekezar/improvingsignrecognitionwithphonology 11 Feb 2023

We use insights from research on American Sign Language (ASL) phonology to train models for isolated sign language recognition (ISLR), a step towards automatic sign language understanding.

Improving Continuous Sign Language Recognition with Adapted Image Models

hulianyuyy/adaptsign 12 Apr 2024

Besides, fully fine-tuning the model easily forgets the generic essential knowledge acquired in the pretraining stage and overfits the downstream data.

Real-time Sign Language Fingerspelling Recognition using Convolutional Neural Networks from Depth map

byeongkeun-kang/FingerspellingRecognition 10 Sep 2015

We train CNNs for the classification of 31 alphabets and numbers using a subset of collected depth data from multiple subjects.
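The core idea is a CNN that maps a single depth map to one of 31 alphabet/number classes. Below is a toy, numpy-only sketch of that shape of model: one naive convolution, a ReLU, a crude pooling step, and a linear head over 31 classes. The kernel, head, and pooling are random placeholders for illustration, not the trained network from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
NUM_CLASSES = 31              # 31 alphabet/number classes, as in the paper

def conv2d_valid(img, kernel):
    # Naive 2-D valid convolution (loops are fine for a sketch).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict(depth_map, kernel, head):
    feat = np.maximum(conv2d_valid(depth_map, kernel), 0)  # conv + ReLU
    pooled = feat.reshape(-1)[:head.shape[0]]              # crude "pooling"
    return int(np.argmax(pooled @ head))                   # class index

depth = rng.standard_normal((32, 32))      # toy stand-in for a depth map
kernel = rng.standard_normal((3, 3))
head = rng.standard_normal((64, NUM_CLASSES))
pred = predict(depth, kernel, head)
print(pred)
```

A real implementation would stack several learned conv/pool layers and train the whole network on the collected multi-subject depth data.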

A Study of Convolutional Architectures for Handshape Recognition applied to Sign Language

midusi/convolutional_handshape CACIC 2017

Using the LSA16 and RWTH-PHOENIX-Weather handshape datasets, we performed experiments with the LeNet, VGG16, ResNet-34, and All Convolutional architectures, as well as Inception with normal training and via transfer learning, and compared them to the state of the art on these datasets.

Neural Sign Language Translation

neccam/nslt CVPR 2018

SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language.

Temporal Unet: Sample Level Human Action Recognition using WiFi

geekfeiw/WiSLAR 19 Apr 2019

In this task, every WiFi distortion sample in the whole series should be categorized into one action, which is a critical technique in precise action localization, continuous action segmentation, and real-time action recognition.

A Deep Neural Framework for Continuous Sign Language Recognition by Iterative Training

iliasprc/slrzoo IEEE Transactions on Multimedia 2019

In contrast, our proposed architecture adopts deep convolutional neural networks with stacked temporal fusion layers as the feature extraction module, and bi-directional recurrent neural networks as the sequence learning module.
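The sequence-learning half of such an architecture runs a bidirectional recurrent network over the per-frame features produced by the CNN. The sketch below shows only that idea with a plain tanh RNN shared across directions; the weights and dimensions are illustrative placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

def rnn_pass(feats, W, U):
    # Simple tanh RNN scanned over the time axis.
    h = np.zeros(W.shape[1])
    out = []
    for x in feats:
        h = np.tanh(x @ W + h @ U)
        out.append(h)
    return np.stack(out)

def bidirectional(feats, W, U):
    # Run forward and backward passes, concatenate hidden states,
    # so each frame's output sees both past and future context.
    fwd = rnn_pass(feats, W, U)
    bwd = rnn_pass(feats[::-1], W, U)[::-1]
    return np.concatenate([fwd, bwd], axis=1)   # (T, 2 * hidden)

T, D, H = 10, 16, 8
feats = rng.standard_normal((T, D))             # stub CNN frame features
W = rng.standard_normal((D, H)) * 0.1
U = rng.standard_normal((H, H)) * 0.1
out = bidirectional(feats, W, U)
print(out.shape)
```

In a full system the per-frame outputs would feed a CTC-style objective for alignment-free sequence training.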