Sign Language Recognition
67 papers with code • 10 benchmarks • 19 datasets
Sign Language Recognition is a computer vision and natural language processing task: automatically recognizing sign language gestures and translating them into written or spoken language. The goal is to develop algorithms that understand and interpret sign language, enabling people who use it as their primary mode of communication to communicate more easily with non-signers.
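As a rough sketch of how isolated-sign recognition is commonly framed, the task can be viewed as video classification: extract per-frame features, pool them over time, and classify the result. The feature extractor, weights, and class count below are illustrative placeholders, not any specific published model (real systems use deep CNN or Transformer backbones).

```python
import numpy as np

def frame_features(frame: np.ndarray) -> np.ndarray:
    # Placeholder feature extractor: in practice a CNN backbone would map
    # each H x W x 3 frame to a learned feature vector.
    return frame.mean(axis=(0, 1))  # crude per-channel mean, shape (3,)

def classify_sign(video: np.ndarray, weights: np.ndarray) -> int:
    # video: (T, H, W, 3) array of frames; weights: (num_classes, feat_dim)
    feats = np.stack([frame_features(f) for f in video])  # (T, feat_dim)
    pooled = feats.mean(axis=0)                           # temporal average pooling
    scores = weights @ pooled                             # linear classifier head
    return int(np.argmax(scores))

# Toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
video = rng.random((16, 64, 64, 3))   # 16 frames of a hypothetical sign clip
weights = rng.random((5, 3))          # 5 hypothetical sign classes
pred = classify_sign(video, weights)
```

Word-level benchmarks such as the one credited below follow this isolated-clip setting; continuous recognition additionally requires locating sign boundaries in an unsegmented stream.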
(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)
Latest papers with no code
Deep Learning Recognition for Arabic Alphabet Sign Language RGB Dataset
This paper introduces a Convolutional Neural Network (CNN) model for Arabic Alphabet Sign Language (AASL) recognition, using the AASL dataset.
Systemic Biases in Sign Language AI Research: A Deaf-Led Call to Reevaluate Research Agendas
Growing research in sign language recognition, generation, and translation AI has been accompanied by calls for ethical development of such technologies.
Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation
Changes in facial expression, head movement, body movement, and gesture are important cues in sign language recognition, yet most current continuous sign language recognition (CSLR) methods focus on static images in the video sequence at the frame-level feature-extraction stage, ignoring the dynamic changes between frames.
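The gap the snippet describes can be illustrated with simple temporal differencing: per-frame static features cannot distinguish a still pose from a moving one, whereas even a crude frame-difference signal exposes the dynamics. This is only a minimal illustration of why motion matters, not the paper's motor attention mechanism or self-distillation scheme.

```python
import numpy as np

def motion_energy(frames: np.ndarray) -> np.ndarray:
    # frames: (T, H, W) grayscale video clip.
    # Absolute temporal differences expose the dynamic changes that
    # purely frame-level static features miss.
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) per-pixel change
    return diffs.mean(axis=(1, 2))           # one motion score per transition

rng = np.random.default_rng(1)
static = np.repeat(rng.random((1, 8, 8)), 4, axis=0)  # identical frames
moving = rng.random((4, 8, 8))                        # changing frames
static_energy = motion_energy(static)  # all zeros: no motion
moving_energy = motion_energy(moving)  # strictly positive
```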
A Transformer Model for Boundary Detection in Continuous Sign Language
One of the prominent challenges in CSLR pertains to accurately detecting the boundaries of isolated signs within a continuous video stream.
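Assuming a model (such as the Transformer the title refers to) emits a per-frame boundary probability, turning those scores into isolated-sign segments can be sketched as simple thresholding and splitting. The threshold value and score vector here are hypothetical, chosen only to make the mechanics concrete.

```python
def segments_from_boundary_scores(scores, threshold=0.5):
    # scores: per-frame probabilities that a sign boundary falls at that frame.
    # Frames whose score meets the threshold become cut points, and the
    # stream is split into (start, end) frame-index segments between cuts.
    boundaries = [i for i, s in enumerate(scores) if s >= threshold]
    cuts = [0] + boundaries + [len(scores)]
    return [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]

# Frames 2 and 4 exceed the threshold, splitting six frames into three segments.
segs = segments_from_boundary_scores([0.1, 0.2, 0.9, 0.1, 0.8, 0.2])
# segs == [(0, 2), (2, 4), (4, 6)]
```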
Enhancing Sequential Model Performance with Squared Sigmoid TanH (SST) Activation Under Data Constraints
Activation functions enable neural networks to learn complex representations by introducing non-linearities.
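To illustrate the point, two standard activations are shown below: each maps any real input into a bounded range, and it is exactly this non-linearity that keeps stacked layers from collapsing into a single linear map. These are the classic sigmoid and tanh, not the paper's SST activation, whose exact form is not given in this snippet.

```python
import math

def sigmoid(x: float) -> float:
    # Squashes any real input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    # Squashes any real input into the open interval (-1, 1).
    return math.tanh(x)

# Without a non-linearity between them, stacked linear layers are equivalent
# to one linear layer; activations are what give depth its expressive power.
mid = sigmoid(0.0)   # 0.5: the sigmoid's midpoint
zero = tanh(0.0)     # 0.0: tanh is centered at the origin
```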
APALU: A Trainable, Adaptive Activation Function for Deep Learning Networks
Addressing these limitations, we introduce a novel trainable activation function, adaptive piecewise approximated activation linear unit (APALU), to enhance the learning performance of deep learning across a broad range of tasks.
Connecting the Dots: Leveraging Spatio-Temporal Graph Neural Networks for Accurate Bangla Sign Language Recognition
Recent advances in Deep Learning and Computer Vision have been successfully leveraged to serve marginalized communities in various contexts.
SignVTCL: Multi-Modal Continuous Sign Language Recognition Enhanced by Visual-Textual Contrastive Learning
Sign language recognition (SLR) plays a vital role in facilitating communication for the hearing-impaired community.
Training program on sign language: social inclusion through Virtual Reality in ISENSE project
The ISENSE project was created to assist deaf students during their academic life by proposing different technological tools for teaching sign language to the hearing community in the academic context.
Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning
The number of people with some degree of hearing loss reached 1.57 billion in 2019.