How2Sign (A Large-scale Multimodal Dataset for Continuous American Sign Language)

Introduced by Duarte et al. in How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language

How2Sign is a multimodal, multiview continuous American Sign Language (ASL) dataset consisting of a parallel corpus of more than 80 hours of sign language videos together with corresponding modalities including speech, English transcripts, and depth. A three-hour subset was additionally recorded in the Panoptic Studio, enabling detailed 3D pose estimation.
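
No data loader is bundled with this entry, so the following is a minimal Python sketch of how one might pair How2Sign video clips with their English translations. The directory layout, the alignment file name, and the SENTENCE_NAME / SENTENCE column names are assumptions about how a downloaded release could be unpacked, not part of the official dataset documentation.

```python
import csv
from pathlib import Path

# Hypothetical layout: adjust to match however the official release is unpacked.
DATASET_ROOT = Path("how2sign")                   # assumed root directory
VIDEO_DIR = DATASET_ROOT / "videos"               # assumed folder of .mp4 clips
ALIGN_CSV = DATASET_ROOT / "how2sign_train.csv"   # assumed alignment/translation file


def load_clips(align_csv: Path, video_dir: Path):
    """Yield (video_path, english_sentence) pairs for clips present on disk.

    Assumes each row of the alignment file has a SENTENCE_NAME column naming
    the clip and a SENTENCE column holding the English translation; these
    column names are a guess and should be checked against the actual release.
    """
    with align_csv.open(newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            video_path = video_dir / f"{row['SENTENCE_NAME']}.mp4"
            if video_path.exists():
                yield video_path, row["SENTENCE"]


if __name__ == "__main__":
    # Print a short preview of each clip/translation pair that was found.
    for path, sentence in load_clips(ALIGN_CSV, VIDEO_DIR):
        print(path.name, "->", sentence[:60])
```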
