Zero-Shot Sign Language Recognition: Can Textual Data Uncover Sign Languages?

24 Jul 2019 · Yunus Can Bilge, Nazli Ikizler-Cinbis, Ramazan Gokberk Cinbis

We introduce the problem of zero-shot sign language recognition (ZSSLR), where the goal is to leverage models learned over seen sign class examples to recognize instances of unseen sign classes. To this end, we propose to utilize the readily available textual descriptions in sign language dictionaries as an intermediate-level semantic representation for knowledge transfer...
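The abstract above describes the general zero-shot recipe of transferring knowledge through textual class descriptions; the snippet below is a minimal, hypothetical sketch of that idea (not the authors' actual model): video features of seen classes are mapped into the space of description embeddings, and unseen classes are scored by similarity to their own description embeddings. All dimensions, the ridge-regression mapping, and the random placeholder features are illustrative assumptions.

```python
# Minimal zero-shot recognition sketch via textual class embeddings.
# NOT the paper's implementation; every constant and the linear mapping
# are assumptions made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

D_VID, D_TXT = 512, 300            # assumed video / text embedding sizes
N_SEEN, N_UNSEEN, N_TRAIN = 20, 5, 400

# Placeholder data: in practice these would come from a video encoder and a
# text encoder applied to sign-dictionary descriptions.
text_seen   = rng.normal(size=(N_SEEN,   D_TXT))   # one description embedding per seen class
text_unseen = rng.normal(size=(N_UNSEEN, D_TXT))   # one description embedding per unseen class
X_train     = rng.normal(size=(N_TRAIN,  D_VID))   # video features of seen-class examples
y_train     = rng.integers(0, N_SEEN, size=N_TRAIN)

# Learn a linear map W: video space -> text space with ridge regression,
# regressing each training video onto its class's description embedding.
T_train = text_seen[y_train]                        # (N_TRAIN, D_TXT) regression targets
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(D_VID),
                    X_train.T @ T_train)            # (D_VID, D_TXT)

def predict_unseen(x_vid):
    """Score a test video against unseen classes by cosine similarity
    between its projected embedding and each class description."""
    z = x_vid @ W
    z /= np.linalg.norm(z) + 1e-8
    t = text_unseen / (np.linalg.norm(text_unseen, axis=1, keepdims=True) + 1e-8)
    return int(np.argmax(t @ z))

x_test = rng.normal(size=D_VID)                     # a video of an unseen sign
print("predicted unseen class index:", predict_unseen(x_test))
```

Any zero-shot compatibility model (bilinear scoring, learned joint embeddings, etc.) could replace the ridge-regression mapping used here; the sketch only illustrates how textual descriptions can stand in for visual training data of unseen classes.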
