4 papers with code • 2 benchmarks • 4 datasets
Yan et al. (2019) CSGN:
"When the dancer is stepping, jumping and spinning on the stage, attentions of all audiences are attracted by the stream of the fluent and graceful movements. Building a model that is capable of dancing is as fascinating a task as appreciating the performance itself. In this paper, we aim to generate long-duration human actions represented as skeleton sequences, e.g. those that cover the entirety of a dance, with hundreds of moves and countless possible combinations."
(Image credit: Convolutional Sequence Generation for Skeleton-Based Action Synthesis)
CSGN captures the temporal structure at multiple scales through its GP prior and temporal convolutions, and establishes the spatial connection between the latent vectors and the skeleton graphs via a novel graph-refining scheme.
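The two-stage idea described above can be sketched as a toy in NumPy: sample temporally smooth latent trajectories from a Gaussian-process prior, then decode them with a 1-D temporal convolution. This is an illustrative sketch only, not the authors' CSGN implementation; the function names, the RBF kernel, and the fixed smoothing filter are all assumptions made for the example.

```python
import numpy as np

def gp_prior_latents(T, d, length_scale=5.0, seed=0):
    """Sample d latent channels over T frames from a GP prior with an RBF
    kernel (assumed here for illustration); the smooth kernel yields
    temporally coherent latent trajectories."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * length_scale ** 2))
    K += 1e-6 * np.eye(T)  # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((T, d))  # shape (T, d)

def temporal_conv(latents, kernel):
    """Apply a shared 1-D temporal convolution per latent channel, standing
    in for the convolutional decoder stage of the pipeline."""
    T, d = latents.shape
    return np.stack(
        [np.convolve(latents[:, c], kernel, mode="same") for c in range(d)],
        axis=1,
    )

z = gp_prior_latents(T=64, d=8)                       # smooth latent sequence
x = temporal_conv(z, kernel=np.array([0.25, 0.5, 0.25]))  # decoded features
print(z.shape, x.shape)  # (64, 8) (64, 8)
```

A real model would replace the fixed kernel with learned transposed convolutions and map the output onto skeleton-graph joints; the sketch only shows why a GP prior gives frame-to-frame coherence that i.i.d. noise would not.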
Action recognition is a relatively established task: given an input sequence of human motion, the goal is to predict its action category.
Generating long-range skeleton-based human actions has been a challenging problem because small deviations in a single frame can compound into a malformed action sequence.
Ranked #1 on Human action generation on NTU RGB+D
Inspired by recent advances in generative models, we introduce a human action generation model that produces consecutive sequences of human motions, composing them into novel actions.