One-Shot 3D Action Recognition
5 papers with code • 1 benchmark • 2 datasets
Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representations for action recognition.
We show that our approach generalizes well in experiments on the UTD-MHAD dataset for inertial, skeleton, and fused data, and on the Simitate dataset for motion-capture data.
One-shot action recognition allows the recognition of human-performed actions with only a single training example.
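The one-shot setting can be illustrated with a minimal sketch: embed each skeleton sequence into a fixed-size vector and classify a query by its cosine similarity to a single reference example per class. The `embed` function below is a toy hand-crafted embedding (mean and standard deviation of joint coordinates), not the learned encoder any of the listed papers use; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def embed(skeleton_seq):
    """Toy embedding: per-dimension mean and std over time.
    A real one-shot system would use a learned encoder instead
    (this hand-crafted version is only for illustration)."""
    seq = np.asarray(skeleton_seq, dtype=float)   # (frames, joints, 3)
    flat = seq.reshape(seq.shape[0], -1)          # (frames, joints*3)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def cosine(a, b):
    # Small epsilon guards against zero-norm embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def one_shot_classify(query_seq, exemplars):
    """exemplars: {label: reference sequence} -- one example per class,
    which is exactly the one-shot constraint."""
    q = embed(query_seq)
    return max(exemplars, key=lambda lbl: cosine(q, embed(exemplars[lbl])))
```

For example, given one "wave"-like exemplar (oscillating joints) and one "jump"-like exemplar (monotonically rising joints), a slightly rescaled copy of the wave sequence is assigned the "wave" label by nearest-cosine matching.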
We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources.