14 papers with code • 0 benchmarks • 3 datasets
These leaderboards are used to track progress in Action Generation
Most implemented papers
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction
We propose to decompose instruction execution into goal prediction and action generation.
Human Action Generation with Generative Adversarial Networks
Inspired by recent advances in generative models, we introduce a human action generation model that produces consecutive sequences of human motions to form novel actions.
Efficient Motion Planning for Automated Lane Change based on Imitation Learning and Mixed-Integer Optimization
Traditional motion planning methods suffer from several drawbacks in terms of optimality, efficiency, and generalization capability.
Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions
In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.
Graph Constrained Reinforcement Learning for Natural Language Action Spaces
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language.
Structure-Aware Human-Action Generation
Generating long-range skeleton-based human actions has been a challenging problem since small deviations of one frame can cause a malformed action sequence.
Action2Motion: Conditioned Generation of 3D Human Motions
Action recognition is a relatively established task: given an input sequence of human motion, the goal is to predict its action category.
Keep CALM and Explore: Language Models for Action Generation in Text-based Games
In this paper, we propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
Generative Adversarial Graph Convolutional Networks for Human Action Synthesis
Synthesising the spatial and temporal dynamics of the human body skeleton remains a challenging task, both in terms of the quality of the generated shapes and of their diversity, particularly when synthesising realistic body movements of a specific action (action conditioning).
MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion
We introduce MUGL, a novel deep neural model for large-scale, diverse generation of single and multi-person pose-based action sequences with locomotion.