Given a video of a person signing, the task is to predict the sign, or sequence of signs, being performed.
Sign language is commonly used by deaf and speech-impaired people to communicate, but it requires significant effort to master.
Transfer learning is implemented by pre-training a network on the American Sign Language dataset MS-ASL and subsequently fine-tuning it separately on three different sizes of the German Sign Language dataset SIGNUM.
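The pre-train-then-fine-tune recipe described above can be illustrated with a toy example. The sketch below is a generic, stdlib-only illustration of the idea (learn parameters on a large source task, then adapt only part of the model on a small target task); it is not the papers' actual setup, which pre-trains a video network on MS-ASL and fine-tunes it on SIGNUM.

```python
import random

def train(params, data, lr=0.1, steps=200, freeze_w=False):
    """Fit y = w*x + b by stochastic gradient descent on squared error.
    If freeze_w is True, only the bias b is updated, mimicking
    fine-tuning a small 'head' on top of a frozen backbone."""
    w, b = params
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * ((w * x + b) - y)
        if not freeze_w:
            w -= lr * grad * x
        b -= lr * grad
    return w, b

random.seed(0)
# "Pre-train" on a large source task (y = 2x), then fine-tune only the
# bias on a few target examples (y = 2x + 1), reusing the learned slope.
source = [(x, 2 * x) for x in [-2, -1, 0, 1, 2]]
target = [(0, 1), (1, 3)]                      # small target dataset
w, b = train((0.0, 0.0), source)               # pre-training
w, b = train((w, b), target, freeze_w=True)    # fine-tuning
```

The point the toy makes is the same one the papers rely on: the structure learned on the large source dataset transfers, so only a small part of the model needs to be re-estimated from the scarce target data.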
We therefore apply attention to synchronize, and to capture the entangled dependencies between, the different sign language components.
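The attention mechanism invoked above can be sketched as standard scaled dot-product attention: each query is compared against all keys, and the resulting weights mix the values. This is a plain-Python illustration of the generic mechanism, not the specific cross-component architecture of the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a sequence.
    query: list of floats (dim d); keys, values: lists of such vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    # Weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim_v)]

# The query is most similar to the first key, so the output leans
# toward the first value vector.
ctx = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Applied across streams (e.g. hand shape, facial expression, body pose), such weights let each component attend to the time steps of the others, which is what "synchronize entangled dependencies" amounts to in practice.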
Sign language translation (SLT) aims to translate sign video sequences into natural-language text sentences.
This work presents a meta-study covering around 300 published sign language recognition papers with over 400 experimental results.
Recent progress in fine-grained gesture and action classification, and in machine translation, points to the possibility of automated sign language recognition becoming a reality.
This contradicts previous claims that ground-truth (GT) gloss translation acts as an upper bound on SLT performance, and reveals that glosses are an inefficient representation of sign language.