Surgical Hands is a dataset that provides multi-instance articulated hand pose annotations for in-vivo videos. The dataset contains 76 video clips from 28 publicly available surgical videos and over 8.1k annotated hand pose instances.
1 PAPER • NO BENCHMARKS YET
The 3D Poses in the Wild (3DPW) dataset is the first in-the-wild dataset with accurate 3D poses for evaluation. While other outdoor datasets exist, they are all restricted to a small recording volume. 3DPW is the first to include video footage captured from a moving phone camera.
343 PAPERS • 5 BENCHMARKS
The EgoDexter dataset provides 2D and 3D pose annotations for four test video sequences totalling 3190 frames. The videos were recorded with a body-mounted camera from egocentric viewpoints and feature cluttered backgrounds, fast camera motion, and complex interactions with various objects. Fingertip positions were manually annotated for 1485 of the 3190 frames.
10 PAPERS • NO BENCHMARKS YET
SynthHands is a hand pose estimation dataset consisting of real captured hand motion retargeted to a virtual hand, rendered with natural backgrounds and interactions with different objects. It contains data for male and female hands, both with and without object interaction. While the hand and foreground object are synthetically generated using Unity, the motion was obtained from real performances as described in the accompanying paper; real object textures and background images (depth and color) were also used. Ground-truth 3D positions are provided for 21 keypoints of the hand.
5 PAPERS • NO BENCHMARKS YET
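Several of these datasets (SynthHands, EgoDexter) annotate the hand with 21 3D keypoints. As a minimal sketch of how such an annotation might be organized, the snippet below reshapes a flat list of 63 coordinates into named per-finger arrays. The wrist-first layout and the four-joints-per-finger convention are assumptions for illustration, not taken from any dataset's official documentation.

```python
import numpy as np

# Assumed (hypothetical) layout: wrist first, then 4 joints per finger.
NUM_KEYPOINTS = 21
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def keypoints_to_dict(flat):
    """Reshape 63 floats (21 keypoints x 3D) into a wrist point
    plus one (4, 3) array per finger."""
    pts = np.asarray(flat, dtype=float).reshape(NUM_KEYPOINTS, 3)
    out = {"wrist": pts[0]}
    for i, name in enumerate(FINGERS):
        out[name] = pts[1 + 4 * i : 1 + 4 * (i + 1)]  # 4 joints per finger
    return out

# Toy annotation: 63 zeros stand in for real 3D coordinates.
hand = keypoints_to_dict([0.0] * 63)
print(hand["index"].shape)  # (4, 3)
```

Grouping keypoints by finger like this makes per-finger error metrics (a common evaluation axis for these benchmarks) straightforward to compute.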
The Hands in Action (HIC) dataset contains RGB-D sequences of hands interacting with objects.
9 PAPERS • NO BENCHMARKS YET
First-Person Hand Action Benchmark is a collection of RGB-D video sequences comprising more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations.
13 PAPERS • 2 BENCHMARKS