Application of Transfer Learning to Sign Language Recognition using an Inflated 3D Deep Convolutional Neural Network

25 Feb 2021 · Roman Töngi

Sign language is the primary language of many people with hearing loss. Sign language recognition (SLR), the automatic recognition of sign language, remains a challenging problem for computers, though some progress has been made recently using deep learning. Deep learning models generally require huge amounts of training data, yet such datasets are missing for the majority of sign languages. Transfer learning is a technique that uses a related task with abundant data to help solve a target task lacking sufficient data. It has been applied highly successfully in computer vision and natural language processing, but much less research has been conducted in the field of SLR. This paper investigates how effectively transfer learning can be applied to isolated SLR, using an inflated 3D convolutional neural network (I3D) as the deep learning architecture. Transfer learning is implemented by pre-training the network on the American Sign Language dataset MS-ASL and subsequently fine-tuning it separately on three differently sized subsets of the German Sign Language dataset SIGNUM. The experiments give clear empirical evidence that transfer learning can be applied effectively to isolated SLR: the accuracy of the networks trained with transfer learning increased substantially, by up to 21%, compared to baseline models that were not pre-trained on the MS-ASL dataset.
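The transfer learning recipe described above (pre-train a 3D CNN on a large source sign-language dataset, then fine-tune on the smaller target dataset) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: it substitutes torchvision's r3d_18 (a 3D ResNet with Kinetics-400 weights) for the paper's I3D network, since torchvision does not ship an I3D implementation, and the target class count of 450 is an assumption based on SIGNUM's isolated-sign vocabulary.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_TARGET_CLASSES = 450  # assumed: SIGNUM's isolated-sign vocabulary size

# Start from a 3D CNN pre-trained on a large source video dataset.
# (The paper pre-trains I3D on MS-ASL; Kinetics-400 weights are used
# here only because torchvision ships them.)
model = r3d_18(weights="KINETICS400_V1")

# Replace the classification head to match the target vocabulary.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Fine-tune the whole network; alternatively, set requires_grad = False
# on the backbone parameters to train only the new head.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of video clips with shape
# (batch, channels, frames, height, width).
clips = torch.randn(4, 3, 16, 112, 112)
labels = torch.randint(0, NUM_TARGET_CLASSES, (4,))

optimizer.zero_grad()
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```

In the paper's setup, the pre-training stage would use MS-ASL rather than Kinetics; the head replacement and fine-tuning steps are the same, and the non-pre-trained baselines correspond to starting from randomly initialized weights instead of the pre-trained checkpoint.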
