UT-Kinect (UTKinect-Action3D Dataset)

Introduced by Xia et al. in "View Invariant Human Action Recognition Using Histograms of 3D Joints"

The UT-Kinect dataset is a dataset for action recognition from depth sequences. The videos were captured using a single stationary Kinect. There are 10 action types: walk, sit down, stand up, pick up, carry, throw, push, pull, wave hands, clap hands. There are 10 subjects; each subject performs each action twice. Three channels were recorded: RGB, depth, and skeleton joint locations. The three channels are synchronized. The frame rate is 30 fps.
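The dataset's size follows directly from these figures: 10 subjects performing 10 actions twice each yields 200 sequences. A minimal Python sketch of the label space and sequence index (the action list and counts come from the description above; this is an illustrative enumeration, not an official loader):

```python
# Action classes as listed in the dataset description.
ACTIONS = ["walk", "sit down", "stand up", "pick up", "carry",
           "throw", "push", "pull", "wave hands", "clap hands"]
NUM_SUBJECTS = 10
REPETITIONS = 2  # each subject performs each action twice

# (subject, action, repetition) tuple for every sequence in the dataset.
sequences = [(s, a, r)
             for s in range(NUM_SUBJECTS)
             for a in ACTIONS
             for r in range(REPETITIONS)]

print(len(ACTIONS))    # 10 action classes
print(len(sequences))  # 200 sequences in total
```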

Source: https://cvrc.ece.utexas.edu/KinectDatasets/HOJ3D.html


