Sign Language Translation
34 papers with code • 5 benchmarks • 13 datasets
Given a video containing sign language, the task is to predict the translation into (written) spoken language.
Image credit: How2Sign
Latest papers with no code
LLMs are Good Sign Language Translators
Sign Language Translation (SLT) is a challenging task that aims to translate sign videos into spoken language.
Factorized Learning Assisted with Large Language Model for Gloss-free Sign Language Translation
Although some approaches work towards gloss-free SLT through jointly training the visual encoder and translation network, these efforts still suffer from poor performance and inefficient use of the powerful Large Language Model (LLM).
Using an LLM to Turn Sign Spottings into Spoken Language Sentences
Sign Language Translation (SLT) is a challenging task that aims to generate spoken language sentences from sign language videos.
Towards Privacy-Aware Sign Language Translation at Scale
A major impediment to the advancement of sign language translation (SLT) is data scarcity.
Unsupervised Sign Language Translation and Generation
Motivated by the success of unsupervised neural machine translation (UNMT), we introduce an unsupervised sign language translation and generation network (USLNet), which learns from abundant single-modality (text and video) data without parallel sign language data.
ChatGPT, Let us Chat Sign Language: Experiments, Architectural Elements, Challenges and Research Directions
Existing research on ChatGPT has focused on its use in various domains.
VK-G2T: Vision and Context Knowledge enhanced Gloss2Text
Existing sign language translation methods follow a two-stage pipeline: first converting the sign language video into a gloss sequence (i.e., Sign2Gloss) and then translating the generated gloss sequence into a spoken language sentence (i.e., Gloss2Text).
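The two-stage pipeline above can be sketched as two composed functions. This is an illustrative toy, not any paper's implementation: the real Sign2Gloss and Gloss2Text stages are trained neural models, while the dictionary lookups below are hypothetical stand-ins so the example runs.

```python
# Toy sketch of the two-stage SLT pipeline: Sign2Gloss then Gloss2Text.
# Both stages are stubbed with lookup tables; a real system would use
# a visual encoder for stage 1 and a translation model for stage 2.

def sign2gloss(video_frames):
    """Stage 1: map sign video frames to a gloss sequence (stubbed)."""
    lookup = {"frame_hello": "HELLO", "frame_you": "YOU"}
    return [lookup[f] for f in video_frames if f in lookup]

def gloss2text(glosses):
    """Stage 2: turn the gloss sequence into a spoken-language sentence (stubbed)."""
    phrases = {("HELLO", "YOU"): "Hello, how are you?"}
    return phrases.get(tuple(glosses), " ".join(glosses).capitalize())

def translate(video_frames):
    """End-to-end pipeline: video frames -> glosses -> sentence."""
    return gloss2text(sign2gloss(video_frames))

print(translate(["frame_hello", "frame_you"]))  # -> Hello, how are you?
```

The gloss sequence is the intermediate bottleneck this pipeline depends on, which is why several of the papers listed here pursue gloss-free alternatives.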
sign.mt: Real-Time Multilingual Sign Language Translation Application
Harnessing state-of-the-art open-source models, this tool aims to address the communication divide between the hearing and the deaf, facilitating seamless translation in both spoken-to-signed and signed-to-spoken directions.
A New Dataset for End-to-End Sign Language Translation: The Greek Elementary School Dataset
A characteristic example is the Phoenix2014T benchmark dataset, which covers only weather forecasts in German Sign Language.
Sign Language Translation with Iterative Prototype
Technically, IP-SLT consists of feature extraction, prototype initialization, and iterative prototype refinement.
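The three stages named above (feature extraction, prototype initialization, iterative prototype refinement) can be sketched as a simple loop. Everything below is a hypothetical stand-in, not the paper's actual operators: features are toy scalars and "refinement" is a toy averaging update used only to show the iterate-toward-the-features control flow.

```python
# Illustrative sketch of an iterative-prototype loop: extract features,
# initialize a prototype, then refine it repeatedly against the features.
# All functions are toy stand-ins, not IP-SLT's real modules.

def extract_features(video_frames):
    # Stand-in for a visual encoder: one scalar feature per frame.
    return [len(f) * 0.1 for f in video_frames]

def init_prototype(features):
    # Stand-in initialization: an all-zero prototype of matching length.
    return [0.0] * len(features)

def refine(prototype, features):
    # One toy refinement step: move the prototype halfway toward the features.
    return [0.5 * p + 0.5 * f for p, f in zip(prototype, features)]

def iterative_prototype(video_frames, iterations=3):
    """Run the three stages: features -> initial prototype -> refinement loop."""
    feats = extract_features(video_frames)
    proto = init_prototype(feats)
    for _ in range(iterations):
        proto = refine(proto, feats)
    return proto
```

With this toy update, each pass closes half the remaining gap, so after `k` iterations the prototype sits at a fraction `1 - 0.5**k` of the feature values; the point is only the refine-in-a-loop structure, with the final prototype decoded into the output sentence.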