In the quest for robust hand segmentation methods, we evaluated the performance of state-of-the-art semantic segmentation methods, both off-the-shelf and fine-tuned, on existing datasets.
FINE-GRAINED ACTION RECOGNITION, HAND SEGMENTATION, SEMANTIC SEGMENTATION
Our model is built on the observation that egocentric activities are strongly characterized by the objects involved and their locations in the video.
Ranked #4 on Egocentric Activity Recognition on EGTEA
We propose a two-stage convolutional neural network (CNN) architecture, called HGR-Net, for robust recognition of hand gestures: the first stage performs accurate semantic segmentation to determine hand regions, and the second stage identifies the gesture.
Ranked #1 on Hand Gesture Segmentation on OUHANDS
HAND GESTURE RECOGNITION, HAND GESTURE SEGMENTATION, HAND SEGMENTATION, SEMANTIC SEGMENTATION
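The two-stage segment-then-classify pipeline can be illustrated with a minimal numpy sketch. This is a toy stand-in, not the HGR-Net architecture: `stage1_segment` replaces the segmentation CNN with a simple threshold, and `stage2_classify` replaces the recognition CNN with nearest-prototype matching on the masked region; the function names and prototypes are assumptions for illustration only.

```python
import numpy as np

def stage1_segment(image):
    # Toy stand-in for the segmentation stage:
    # threshold intensity to produce a binary "hand" mask.
    return (image > 0.5).astype(np.float32)

def stage2_classify(image, mask, prototypes):
    # Toy stand-in for the recognition stage: restrict attention
    # to the masked region, then match against class prototypes.
    feat = (image * mask).sum() / max(mask.sum(), 1.0)
    return int(np.argmin(np.abs(prototypes - feat)))

img = np.array([[0.9, 0.8],
                [0.1, 0.2]])
mask = stage1_segment(img)                       # stage 1: hand mask
gesture = stage2_classify(img, mask,
                          prototypes=np.array([0.2, 0.85]))  # stage 2
```

The key design choice the sketch preserves is that stage 2 only ever sees pixels the segmentation stage kept, so classification errors from background clutter are suppressed.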
To this end, we propose a Bayesian CNN-based model adaptation framework for hand segmentation, which introduces and considers two key factors: 1) prediction uncertainty when the model is applied in a new domain and 2) common information about hand shapes shared across domains.
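One common way to obtain the prediction uncertainty such a Bayesian CNN framework relies on is Monte Carlo dropout: run several stochastic forward passes and use the variance of the predictions as an uncertainty estimate. The toy linear "model" and weights below are assumptions for illustration; only the MC-sampling pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(x, w, drop_p=0.5):
    # One stochastic forward pass: Bernoulli dropout on the weights,
    # sigmoid output as a per-pixel hand probability (toy model).
    keep = rng.random(w.shape) >= drop_p
    logits = x @ (w * keep / (1.0 - drop_p))
    return 1.0 / (1.0 + np.exp(-logits))

def predict_with_uncertainty(x, w, n_samples=200):
    # Monte Carlo estimate: mean prediction plus predictive variance.
    # High variance flags inputs the model is unsure about, e.g. when
    # it is applied in a new domain.
    samples = np.stack([stochastic_predict(x, w) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

x = np.array([[1.0, -2.0, 0.5]])  # toy per-pixel features
w = np.array([0.8, -0.3, 1.2])    # toy weights
mean, var = predict_with_uncertainty(x, w)
```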
Hand segmentation and detection in truly unconstrained RGB-based settings is important for many applications.
To overcome this challenge, we develop a neural network that adapts its receptive field not only per layer but also per neuron at each spatial location.
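The idea of a per-location receptive field can be sketched without a deep-learning framework. In the toy code below (an assumption for illustration, not the paper's network), a `gate` map chooses a small or large averaging window independently at every pixel, which is the essential mechanism of a spatially adaptive receptive field.

```python
import numpy as np

def local_mean(image, y, x, k):
    # Mean over a (2k+1) x (2k+1) window, clipped at image borders.
    y0, y1 = max(y - k, 0), min(y + k + 1, image.shape[0])
    x0, x1 = max(x - k, 0), min(x + k + 1, image.shape[1])
    return image[y0:y1, x0:x1].mean()

def adaptive_rf_filter(image, gate):
    # gate[y, x] picks the window radius at that pixel, mimicking a
    # receptive field chosen per neuron rather than per layer.
    out = np.zeros_like(image, dtype=np.float64)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = local_mean(image, y, x, int(gate[y, x]))
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
gate = np.ones((4, 4), dtype=int)  # 3x3 windows everywhere...
gate[0, 0] = 2                     # ...except a 5x5 window at (0, 0)
smoothed = adaptive_rf_filter(img, gate)
```

In a real network the gate would itself be predicted from the input, so ambiguous regions can recruit more context while fine structures keep a narrow field.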
Thus, we propose a hand segmentation method for hand-object interaction that uses only a depth map.