Human action generation
10 papers with code • 7 benchmarks • 8 datasets
Yan et al. (2019) CSGN:
"When the dancer is stepping, jumping and spinning on the stage, attentions of all audiences are attracted by the streamof the fluent and graceful movements. Building a model that is capable of dancing is as fascinating a task as appreciating the performance itself. In this paper, we aim to generate long-duration human actions represented as skeleton sequences, e.g. those that cover the entirety of a dance, with hundreds of moves and countless possible combinations."
(Image credit: Convolutional Sequence Generation for Skeleton-Based Action Synthesis)
Latest papers
FLAG3D: A 3D Fitness Activity Dataset with Language Instruction
With its continuously thriving popularity around the world, fitness activity analysis has become an emerging research topic in computer vision.
Action-conditioned On-demand Motion Generation
We propose a novel framework, On-Demand MOtion Generation (ODMO), for generating realistic and diverse long-term 3D human motion sequences conditioned only on action types with an additional capability of customization.
Generative Adversarial Graph Convolutional Networks for Human Action Synthesis
Synthesising the spatial and temporal dynamics of the human body skeleton remains a challenging task, not only in terms of the quality of the generated shapes, but also of their diversity, particularly to synthesise realistic body movements of a specific action (action conditioning).
MUGL: Large-Scale Multi-Person Conditional Action Generation with Locomotion
We introduce MUGL, a novel deep neural model for large-scale, diverse generation of single and multi-person pose-based action sequences with locomotion.
Action-Conditioned 3D Human Motion Synthesis with Transformer VAE
By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action.
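The mechanism described here, sampling a latent vector once and then querying the decoder with one positional encoding per time step, is what lets the model produce variable-length sequences. A toy sketch of that idea, where `decode_sequence` is a hypothetical stand-in for the Transformer decoder:

```python
import math
import random

def positional_encoding(t, dim):
    """Standard sinusoidal encoding for time step t."""
    return [math.sin(t / 10000 ** (2 * (i // 2) / dim)) if i % 2 == 0
            else math.cos(t / 10000 ** (2 * (i // 2) / dim))
            for i in range(dim)]

def decode_sequence(z, action_id, duration, dim=8):
    """Toy decoder: emits one 'pose vector' per queried time step.
    A real model would attend over z with a Transformer here."""
    frames = []
    for t in range(duration):
        pe = positional_encoding(t, dim)
        frames.append([zi + p + 0.01 * action_id for zi, p in zip(z, pe)])
    return frames

z = [random.gauss(0.0, 1.0) for _ in range(8)]  # one sample from the latent space
motion = decode_sequence(z, action_id=3, duration=60)
print(len(motion))  # 60 frames: the length comes only from the positional queries
```

The key point is that the sequence length never touches the latent code: the same `z` queried with 30 or 300 encodings yields a shorter or longer motion of the same action.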
Action2Motion: Conditioned Generation of 3D Human Motions
Action recognition is a relatively established task, where given an input sequence of human motion, the goal is to predict its action category.
Structure-Aware Human-Action Generation
Generating long-range skeleton-based human actions has been a challenging problem since small deviations of one frame can cause a malformed action sequence.
Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions
In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.
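Modelling transitions in a low-dimensional latent space means a smooth path between two latent codes should decode to a smooth change in the generated action. A minimal sketch of such a path via linear interpolation (a simplification; the paper's learned transitions need not be linear):

```python
def lerp(z0, z1, alpha):
    """Pointwise linear interpolation between two latent vectors."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z0, z1)]

def smooth_path(z0, z1, steps):
    """A sequence of latent codes moving smoothly from z0 to z1."""
    return [lerp(z0, z1, s / (steps - 1)) for s in range(steps)]

# 5 evenly spaced codes between two 4-dimensional latents
path = smooth_path([0.0] * 4, [1.0] * 4, steps=5)
print(path[0], path[-1])  # endpoints match z0 and z1
```

Decoding each code along the path would then yield a gradual morph between the two actions, which is what makes the latent space useful for generating diverse but coherent sequences.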
Human Action Generation with Generative Adversarial Networks
Inspired by the recent advances in generative models, we introduce a human action generation model in order to generate a consecutive sequence of human motions to formulate novel actions.
Conditional Generative Adversarial Nets
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models.
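The core idea of conditional GANs is to feed the class label alongside the noise vector to the generator (and alongside the sample to the discriminator), typically by concatenation. A toy sketch of that conditioning step, with illustrative names:

```python
import random

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot vector."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def generator_input(noise_dim, label, num_classes):
    """cGAN conditioning: concatenate noise z with the class label y."""
    z = [random.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return z + one_hot(label, num_classes)

x = generator_input(noise_dim=16, label=2, num_classes=10)
print(len(x))  # 26 = 16 noise dims + 10 class dims
```

For action generation, the label would be the action category, so a single trained generator can be steered to produce any of the conditioned action types.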