Sign Language Recognition

67 papers with code • 10 benchmarks • 19 datasets

Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal is to develop algorithms that can understand and interpret sign language, enabling people who use it as their primary mode of communication to communicate more easily with non-signers.

(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)
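At a high level, a word-level recognition system maps a short video clip to a single sign label. The sketch below illustrates that pipeline shape with stand-in components (the random projection and linear classifier are placeholders, not any specific paper's model): per-frame feature extraction, temporal pooling, and classification.

```python
import numpy as np

# Minimal sketch of a word-level sign recognition pipeline; all shapes
# and components are illustrative stand-ins, not a published architecture:
# video frames -> per-frame features -> temporal pooling -> sign classifier.

rng = np.random.default_rng(0)

def extract_frame_features(frames: np.ndarray) -> np.ndarray:
    """Stand-in for a CNN backbone: maps (T, H, W, C) frames to (T, D) features."""
    T = frames.shape[0]
    # A real system would run each frame through a pretrained CNN;
    # here we just flatten and project with a fixed random matrix.
    flat = frames.reshape(T, -1)
    proj = rng.standard_normal((flat.shape[1], 64))
    return flat @ proj

def classify_sign(features: np.ndarray, num_signs: int = 10) -> int:
    """Mean-pool over time, then apply a linear classifier with softmax."""
    pooled = features.mean(axis=0)                    # (D,)
    W = rng.standard_normal((pooled.shape[0], num_signs))
    logits = pooled @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(probs.argmax())

frames = rng.random((16, 32, 32, 3))      # 16 toy RGB frames
feats = extract_frame_features(frames)    # (16, 64)
pred = classify_sign(feats)
print(feats.shape, pred)
```

Continuous recognition (translating whole sentences) replaces the pooling-plus-classifier stage with a sequence model, but the frame-feature front end is broadly similar.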

Latest papers with no code

Deep Learning Recognition for Arabic Alphabet Sign Language RGB Dataset

no code yet • Journal of Computer and Communications 2024

This paper introduces a Convolutional Neural Network (CNN) model for Arabic Alphabet Sign Language (AASL) recognition, using the AASL dataset.

Systemic Biases in Sign Language AI Research: A Deaf-Led Call to Reevaluate Research Agendas

no code yet • 5 Mar 2024

Growing research in sign language recognition, generation, and translation AI has been accompanied by calls for ethical development of such technologies.

Continuous Sign Language Recognition Based on Motor Attention Mechanism and Frame-level Self-distillation

no code yet • 29 Feb 2024

Changes in facial expression, head movement, body movement, and gesture are salient cues in sign language recognition. However, most current continuous sign language recognition (CSLR) methods focus on static images in video sequences at the frame-level feature extraction stage, ignoring the dynamic changes between frames.
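The gap described above (static per-frame features missing motion) can be sketched by augmenting frame features with first-order temporal differences, a simple generic stand-in for motion cues (this is an illustration of the problem, not the paper's method):

```python
import numpy as np

def add_motion_features(frame_feats: np.ndarray) -> np.ndarray:
    """Concatenate each frame's static features with the change
    from the previous frame (zero delta for the first frame)."""
    deltas = np.diff(frame_feats, axis=0, prepend=frame_feats[:1])
    return np.concatenate([frame_feats, deltas], axis=1)

feats = np.arange(12, dtype=float).reshape(4, 3)   # (T=4, D=3) toy features
augmented = add_motion_features(feats)             # (4, 6): static + motion
print(augmented.shape)
```

Two identical consecutive frames produce zero delta features, so a downstream model can distinguish held poses from moving ones, which purely static features cannot.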

A Transformer Model for Boundary Detection in Continuous Sign Language

no code yet • 22 Feb 2024

One of the prominent challenges in CSLR pertains to accurately detecting the boundaries of isolated signs within a continuous video stream.
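Boundary detection can be framed as per-frame binary classification: a model (a Transformer in this paper) assigns each frame a boundary score, and thresholded scores are converted into boundary positions. The sketch below shows only that generic post-processing step, with made-up scores rather than real model outputs:

```python
import numpy as np

def scores_to_boundaries(scores: np.ndarray, threshold: float = 0.5) -> list:
    """Return frame indices where the boundary score crosses the
    threshold upward, i.e. the start of each boundary region."""
    above = scores >= threshold
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if above[0]:
        rising = np.concatenate([[0], rising])
    return rising.tolist()

scores = np.array([0.1, 0.2, 0.9, 0.8, 0.1, 0.1, 0.7, 0.2])
print(scores_to_boundaries(scores))   # → [2, 6]
```

The hard part in practice is producing reliable scores despite co-articulation between adjacent signs; the thresholding itself is trivial.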

Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST) Activation Under Data Constraints

no code yet • 14 Feb 2024

Activation functions enable neural networks to learn complex representations by introducing non-linearities.
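The sentence above can be demonstrated directly: without a non-linearity, stacking linear layers collapses to a single linear map, so depth adds no expressiveness.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 4))
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

linear_stack = (x @ W1) @ W2        # two stacked linear layers...
collapsed = x @ (W1 @ W2)           # ...equal one linear layer

with_tanh = np.tanh(x @ W1) @ W2    # inserting tanh breaks the collapse

print(np.allclose(linear_stack, collapsed))   # → True
print(np.allclose(with_tanh, collapsed))      # → False
```

This is why every practical deep network interleaves its linear (or convolutional) layers with non-linear activations.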

APALU: A Trainable, Adaptive Activation Function for Deep Learning Networks

no code yet • 13 Feb 2024

Addressing these limitations, we introduce a novel trainable activation function, adaptive piecewise approximated activation linear unit (APALU), to enhance the learning performance of deep learning across a broad range of tasks.
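APALU's exact functional form is defined in the paper; as a generic illustration of what a "trainable activation function" means, here is PReLU, a well-known example whose negative-side slope `alpha` is a learned parameter updated by gradient descent alongside the network weights:

```python
import numpy as np

def prelu(x: np.ndarray, alpha: float) -> np.ndarray:
    """PReLU: identity for x >= 0, slope `alpha` for x < 0.
    Unlike ReLU's fixed zero slope, alpha is trainable."""
    return np.where(x >= 0, x, alpha * x)

def prelu_grad_alpha(x: np.ndarray) -> np.ndarray:
    """d prelu / d alpha: nonzero only where x < 0, so the
    optimizer can adjust alpha from the negative inputs."""
    return np.where(x >= 0, 0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x, alpha=0.1))        # → [-0.2  -0.05  0.  1.  3.]
print(prelu_grad_alpha(x))        # → [-2.  -0.5  0.  0.  0.]
```

Trainable activations like this let each layer adapt its non-linearity to the data instead of committing to a fixed shape in advance.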

Connecting the Dots: Leveraging Spatio-Temporal Graph Neural Networks for Accurate Bangla Sign Language Recognition

no code yet • 22 Jan 2024

Recent advances in Deep Learning and Computer Vision have been successfully leveraged to serve marginalized communities in various contexts.

SignVTCL: Multi-Modal Continuous Sign Language Recognition Enhanced by Visual-Textual Contrastive Learning

no code yet • 22 Jan 2024

Sign language recognition (SLR) plays a vital role in facilitating communication for the hearing-impaired community.

Training program on sign language: social inclusion through Virtual Reality in ISENSE project

no code yet • 15 Jan 2024

The ISENSE project was created to assist deaf students during their academic life by proposing different technological tools for teaching sign language to the hearing community in the academic context.

Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning

no code yet • 19 Dec 2023

The number of people with some degree of hearing loss reached 1.57 billion in 2019.