no code implementations • 30 Jun 2021 • Mizuki Maruyama, Shuvozit Ghose, Katsufumi Inoue, Partha Pratim Roy, Masakazu Iwamura, Michifumi Yoshioka
In this work, we therefore utilized local region images of both hands and the face, together with skeletal information, to capture fine-grained local appearance and the positions of both hands relative to the body, respectively.
Ranked #2 on Sign Language Recognition on WLASL100 (using extra training data)
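The multi-stream idea above (local region crops for appearance, skeleton keypoints for relative hand position) can be illustrated with a minimal feature-fusion sketch. This is not the authors' implementation: the extractors below are hypothetical stand-ins (a real system would use learned CNN/GCN features), and all names and dimensions are assumptions for illustration only.

```python
import numpy as np

def extract_region_features(region_image, dim=128):
    # Hypothetical stand-in for a learned CNN feature extractor:
    # truncate the flattened crop to a fixed length and L2-normalize.
    flat = region_image.reshape(-1)
    pooled = np.resize(flat, dim)
    return pooled / (np.linalg.norm(pooled) + 1e-8)

def extract_skeleton_features(keypoints):
    # Encode 2-D joint coordinates relative to the body center,
    # so hand positions are expressed relative to the torso.
    center = keypoints.mean(axis=0)
    return (keypoints - center).reshape(-1)

def fuse_streams(left_hand, right_hand, face, keypoints):
    # Concatenate the three local-region streams with the
    # skeleton stream into one descriptor per frame.
    return np.concatenate([
        extract_region_features(left_hand),
        extract_region_features(right_hand),
        extract_region_features(face),
        extract_skeleton_features(keypoints),
    ])

# Toy inputs: 64x64 RGB crops and 17 body keypoints (x, y).
lh = np.random.rand(64, 64, 3)
rh = np.random.rand(64, 64, 3)
fc = np.random.rand(64, 64, 3)
kp = np.random.rand(17, 2)
feat = fuse_streams(lh, rh, fc, kp)  # shape: (3*128 + 17*2,) = (418,)
```

In practice each stream would feed its own network branch before fusion; simple concatenation here just shows how the complementary local and positional cues combine into one representation.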