Sign Language Recognition

70 papers with code • 13 benchmarks • 21 datasets

Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal is to develop algorithms that can understand and interpret sign language, enabling people who use it as their primary mode of communication to communicate more easily with non-signers.

(Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison)

Latest papers with no code

Training program on sign language: social inclusion through Virtual Reality in ISENSE project

no code yet • 15 Jan 2024

The ISENSE project was created to assist deaf students during their academic life by proposing different technological tools for teaching sign language to the hearing community in the academic context.

Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning

no code yet • 19 Dec 2023

The number of people with some degree of hearing loss reached 1.57 billion in 2019.

SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark

no code yet • 31 Oct 2023

We present SignAvatars, the first large-scale, multi-prompt 3D sign language (SL) motion dataset designed to bridge the communication gap for Deaf and hard-of-hearing individuals.

LSA64: An Argentinian Sign Language Dataset

no code yet • 26 Oct 2023

The dataset, called LSA64, contains 3200 videos of 64 different LSA signs recorded by 10 subjects, and is a first step towards building a comprehensive research-level dataset of Argentinian signs, specifically tailored to sign language recognition or other machine learning tasks.

Handshape recognition for Argentinian Sign Language using ProbSom

no code yet • Journal of Computer Science and Technology (JCST) 2016

Automatic sign language recognition is an important topic within the areas of human-computer interaction and machine learning.

Sign Language Recognition without frame-sequencing constraints: A proof of concept on the Argentinian Sign Language

no code yet • 26 Oct 2023

The model employs a bag-of-words approach in all classification steps to explore the hypothesis that frame ordering is not essential for recognition.
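The core idea behind an order-free, bag-of-words representation can be sketched as follows. This is a minimal illustration, not the paper's actual method: the codebook size, descriptor dimensionality, and quantization scheme here are all hypothetical choices. Each video frame is quantized to its nearest codeword, and the video is summarized as a histogram of codeword counts, so shuffling the frames leaves the representation unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each video is a variable-length sequence of per-frame
# hand-shape descriptors (here 32-D), and a fixed codebook of 16 codewords
# quantizes every frame independently of its position in the sequence.
CODEBOOK = rng.normal(size=(16, 32))

def bag_of_words(frames: np.ndarray) -> np.ndarray:
    """Quantize each frame to its nearest codeword and return a
    normalized histogram over the codebook (an order-free representation)."""
    # Distance from every frame to every codeword: shape (n_frames, n_codes).
    dists = np.linalg.norm(frames[:, None, :] - CODEBOOK[None, :, :], axis=2)
    codes = dists.argmin(axis=1)
    hist = np.bincount(codes, minlength=len(CODEBOOK)).astype(float)
    return hist / hist.sum()

# Two videos containing the same frames in different orders map to the
# same bag, which is exactly what makes the representation order-free.
video = rng.normal(size=(40, 32))
shuffled = video[rng.permutation(40)]
assert np.allclose(bag_of_words(video), bag_of_words(shuffled))
```

Any downstream classifier trained on these histograms is then insensitive to frame ordering by construction, which is the property the paper's hypothesis tests.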

A Sign Language Recognition System with Pepper, Lightweight-Transformer, and LLM

no code yet • 28 Sep 2023

This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL) and facilitate non-verbal human-robot interaction.

Attention-Driven Multi-Modal Fusion: Enhancing Sign Language Recognition and Translation

no code yet • 4 Sep 2023

In this paper, we devise a mechanism for the addition of multi-modal information with an existing pipeline for continuous sign language recognition and translation.

Self-Supervised Video Transformers for Isolated Sign Language Recognition

no code yet • 2 Sep 2023

This paper presents an in-depth analysis of various self-supervision methods for isolated sign language recognition (ISLR).

Multimodal Locally Enhanced Transformer for Continuous Sign Language Recognition

no code yet • Conference of the International Speech Communication Association (INTERSPEECH) 2023

In this paper, we propose a novel Transformer-based approach for continuous sign language recognition (CSLR) from videos, aiming to address the shortcomings of traditional Transformers in learning local semantic context of SL.