Human action generation

10 papers with code • 7 benchmarks • 8 datasets

Yan et al. (2019) CSGN:

"When the dancer is stepping, jumping and spinning on the stage, attentions of all audiences are attracted by the streamof the fluent and graceful movements. Building a model that is capable of dancing is as fascinating a task as appreciating the performance itself. In this paper, we aim to generate long-duration human actions represented as skeleton sequences, e.g. those that cover the entirety of a dance, with hundreds of moves and countless possible combinations."

(Image credit: Convolutional Sequence Generation for Skeleton-Based Action Synthesis)

Latest papers with no code

Convolutional Sequence Generation for Skeleton-Based Action Synthesis

no code yet • ICCV 2019

The proposed CSGN captures the temporal structure at multiple scales through a Gaussian process (GP) prior and temporal convolutions, and establishes the spatial connection between the latent vectors and the skeleton graphs via a novel graph refining scheme.
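To make that description concrete, here is a minimal sketch (not the authors' code) of how the pieces could fit together: temporally smooth latent vectors are sampled from a GP prior with an RBF kernel, decoded into joint coordinates with 1-D temporal convolutions, and passed through a toy graph-based smoothing step standing in for the paper's graph refining scheme. All layer sizes, the kernel, and the adjacency matrix below are illustrative assumptions.

```python
# Illustrative sketch of the CSGN idea: GP-prior latents -> temporal conv
# decoder -> graph-based refinement. Not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn

T, D_LATENT, N_JOINTS = 64, 16, 25          # frames, latent dim, skeleton joints

def sample_gp_latents(t_steps, dim, length_scale=8.0):
    """Sample `dim` independent GP trajectories with an RBF kernel over time."""
    t = np.arange(t_steps)[:, None].astype(np.float64)
    k = np.exp(-0.5 * ((t - t.T) / length_scale) ** 2) + 1e-6 * np.eye(t_steps)
    z = np.random.multivariate_normal(np.zeros(t_steps), k, size=dim)  # (dim, T)
    return torch.tensor(z.T, dtype=torch.float32)                      # (T, dim)

class TemporalDecoder(nn.Module):
    """1-D convolutions over time map latent vectors to per-joint 3-D coordinates."""
    def __init__(self, d_latent, n_joints):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(d_latent, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, n_joints * 3, kernel_size=5, padding=2),
        )
        self.n_joints = n_joints

    def forward(self, z):                       # z: (T, d_latent)
        x = self.net(z.T.unsqueeze(0))          # (1, n_joints*3, T)
        return x.squeeze(0).T.reshape(-1, self.n_joints, 3)  # (T, joints, 3)

def refine_with_graph(skeleton, adjacency):
    """Toy stand-in for the graph refining scheme: average each joint with
    its neighbours on the skeleton graph, frame by frame."""
    deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
    return torch.einsum("ij,tjc->tic", adjacency / deg, skeleton)

z = sample_gp_latents(T, D_LATENT)
decoder = TemporalDecoder(D_LATENT, N_JOINTS)
adj = torch.eye(N_JOINTS)                       # placeholder adjacency (self-loops only)
sequence = refine_with_graph(decoder(z), adj)   # (T, N_JOINTS, 3) skeleton sequence
print(sequence.shape)
```

The GP length scale controls how smooth the latent trajectories are over time, which is the mechanism the abstract points to for capturing long-duration structure.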

Deep Video Generation, Prediction and Completion of Human Action Sequences

no code yet • ECCV 2018

In the second stage, a skeleton-to-image network is trained to render a human action video from the complete human pose sequence produced in the first stage.
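The following is a hedged sketch of that two-stage structure under assumed module sizes, not the authors' implementation: stage 1 maps a noise vector to a complete pose sequence, and stage 2 renders each pose into an image frame with transposed convolutions. The module names, layer widths, and image resolution are illustrative.

```python
# Two-stage sketch: pose-sequence generation, then skeleton-to-image rendering.
import torch
import torch.nn as nn

N_JOINTS, T, IMG = 18, 32, 64                  # joints, frames, image size (assumed)

class PoseSequenceGenerator(nn.Module):
    """Stage 1: map a noise vector to a full pose sequence (T, n_joints * 2)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, T * N_JOINTS * 2),
        )

    def forward(self, z):
        return self.net(z).view(-1, T, N_JOINTS * 2)

class SkeletonToImage(nn.Module):
    """Stage 2: render one pose vector into a (3, IMG, IMG) frame."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(N_JOINTS * 2, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),     # 64x64
        )

    def forward(self, pose):                    # pose: (batch, N_JOINTS * 2)
        x = self.fc(pose).view(-1, 256, 4, 4)
        return self.deconv(x)

stage1, stage2 = PoseSequenceGenerator(), SkeletonToImage()
poses = stage1(torch.randn(1, 128))             # (1, T, N_JOINTS * 2)
video = torch.stack([stage2(poses[:, t]) for t in range(T)], dim=1)
print(video.shape)                              # (1, T, 3, 64, 64)
```

Splitting the problem this way lets the pose generator handle temporal dynamics in a low-dimensional space, while the rendering network only has to learn a per-frame pose-to-pixels mapping.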