Detection and tracking of fingertips for geometric transformation of objects in virtual environment

16 Mar 2020  ·  Mohammad Mahmudul Alam, S. M. Mahbubur Rahman ·

This paper presents a two-stage convolutional neural network (CNN) approach for detecting fingertips so that the fingertips can interact with a 3D object in a virtual reality (VR) environment. The first-stage CNN detects and locates the hand. The detected hand is then cropped, resized, and fed to the second-stage CNN, which predicts the coordinates of the fingertips. A tracker then follows the hand continuously so that the system performs reliably in real time. VR environments are designed to demonstrate the performance of the fingertip-based interaction system. The proposed method focuses on the geometric transformation of a virtual 3D object using a thumb-and-index-finger gesture. In particular, the distance between the thumb and index fingertips is used to scale a 3D object in the virtual environment. To realize the system, a dataset of 1000 images, named the Thumb Index 1000 (TI1K) dataset, is developed, covering variations commonly seen in real-life thumb and index fingers. The system is evaluated with the aid of a number of participants and virtual objects that are distinctive in nature. The proposed approach attains the desired goal and performs seamlessly in real time to facilitate human-computer interaction in the VR environment.
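The geometric part of the pipeline described above can be sketched in a few lines: map fingertip coordinates predicted in the resized hand crop back to the full frame, then convert the thumb-index distance into a scale factor for the virtual object. This is a minimal illustration, not the paper's implementation; the crop size, the box format, and the linear distance-to-scale calibration constant `ref_dist` are all assumptions.

```python
import math

def to_frame_coords(pt_crop, box, crop_size=128):
    """Map a fingertip predicted inside the resized hand crop back to
    full-frame pixel coordinates.

    pt_crop   -- (x, y) in the crop fed to the second-stage CNN
    box       -- (x0, y0, x1, y1) hand bounding box from the first stage
    crop_size -- side length of the square CNN input (assumed value)
    """
    x0, y0, x1, y1 = box
    sx = (x1 - x0) / crop_size  # horizontal scale crop -> frame
    sy = (y1 - y0) / crop_size  # vertical scale crop -> frame
    return (x0 + pt_crop[0] * sx, y0 + pt_crop[1] * sy)

def scale_from_fingertips(thumb, index, ref_dist=100.0):
    """Turn the thumb-index fingertip distance (pixels) into a scale
    factor for the 3D object. The linear mapping and the reference
    distance are illustrative assumptions, not the paper's calibration."""
    d = math.hypot(thumb[0] - index[0], thumb[1] - index[1])
    return d / ref_dist

# Example: a 128x128 hand box at (100, 50); fingertips predicted in the crop.
box = (100, 50, 228, 178)
thumb = to_frame_coords((64, 64), box)   # -> (164.0, 114.0)
index = to_frame_coords((4, 124), box)   # -> (104.0, 174.0)
print(scale_from_fingertips(thumb, index))
```

Pinching the fingers together shrinks the object toward a scale of zero, while spreading them enlarges it, matching the scaling gesture the paper describes.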

Datasets


Introduced in the Paper:

TI1K Dataset
