This 27-class American Sign Language dataset consists of hand-gesture photographs collected from 173 individuals. The images were captured with a camera at a frame size of 3024 × 3024 pixels in RGB color space. Each person contributed 130 photos, 5 per class (minor variations in per-class sample sizes can be observed), across 26 classes covering phrases, letters, and numbers; a 27th null class of 314 images serves as a control category. The main motivation was to contribute new data, with the diversity and sample size needed for intelligent computer-vision studies and sign language applications, to technologies that can reduce the communication challenges faced by speech-impaired people.
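
As a rough illustration of how a dataset with this structure might be consumed, the sketch below loads the images with torchvision, downscaling the large 3024 × 3024 captures to a trainable size. It assumes a one-folder-per-class layout; the `data/asl` path and folder names are hypothetical, not part of the dataset's documented structure.

```python
import torch
from torchvision import datasets, transforms

# Downscale the 3024x3024 RGB captures to a size practical for training.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes one folder per class, e.g. data/asl/A, data/asl/B, ...,
# plus a folder for the null control class (layout is hypothetical).
dataset = datasets.ImageFolder(root="data/asl", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

print(f"{len(dataset)} images across {len(dataset.classes)} classes")
```

With 27 class folders present, `dataset.classes` would report 27 labels, matching the 26 gesture classes plus the null category described above.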