Gesture Generation
29 papers with code • 2 benchmarks • 3 datasets
Generation of gestures as a sequence of 3D poses.
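A minimal sketch of the typical data layout for this task, assuming joints are given as 3D Cartesian coordinates; the frame count, joint count, and variable names below are illustrative, not tied to any particular dataset.

```python
import numpy as np

# A gesture clip as a sequence of 3D poses: T frames of J joints,
# each joint an (x, y, z) position.
T, J = 120, 15          # e.g. 4 s at 30 fps, upper-body skeleton
gesture = np.zeros((T, J, 3), dtype=np.float32)

# Per-frame access: the pose at frame t is a (J, 3) array.
pose_t = gesture[0]
assert pose_t.shape == (J, 3)
```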
Libraries
Use these libraries to find Gesture Generation models and implementations.

Most implemented papers
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
robosuite is a simulation framework for robot learning powered by the MuJoCo physics engine.
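A minimal usage sketch following robosuite's documented make/reset/step loop; the environment and robot names are standard examples from the docs, and exact keyword arguments may vary by version.

```python
import numpy as np
import robosuite as suite

# Create a standard manipulation environment (names follow the
# robosuite docs; check the current API for the exact signature).
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,
    use_camera_obs=False,
)

obs = env.reset()
low, high = env.action_spec              # per-dimension action bounds
for _ in range(100):
    action = np.random.uniform(low, high)  # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```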
The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation
While some synthetic conditions are rated as more human-like than motion capture, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.
Learning Individual Styles of Conversational Gesture
Specifically, we perform cross-modal translation from "in-the-wild" monologue speech of a single speaker to their hand and arm motion.
Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity
In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures.
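Not the paper's actual architecture, but a generic sketch of the trimodal idea: text features, audio features, and a learned speaker-identity embedding are fused per frame and decoded into a pose sequence. All dimensions and names are hypothetical.

```python
import torch
import torch.nn as nn

class TrimodalGestureGenerator(nn.Module):
    """Illustrative fusion of text, audio, and speaker identity
    into a pose sequence; dimensions are hypothetical."""
    def __init__(self, text_dim=300, audio_dim=64, n_speakers=100,
                 style_dim=16, hidden=256, pose_dim=45):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, style_dim)
        self.encoder = nn.GRU(text_dim + audio_dim + style_dim,
                              hidden, batch_first=True)
        self.pose_head = nn.Linear(hidden, pose_dim)

    def forward(self, text_feats, audio_feats, speaker_ids):
        # text_feats: (B, T, text_dim), audio_feats: (B, T, audio_dim)
        T = text_feats.size(1)
        style = self.speaker_emb(speaker_ids)         # (B, style_dim)
        style = style.unsqueeze(1).expand(-1, T, -1)  # broadcast over time
        x = torch.cat([text_feats, audio_feats, style], dim=-1)
        h, _ = self.encoder(x)
        return self.pose_head(h)                      # (B, T, pose_dim)
```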
Analyzing Input and Output Representations for Speech-Driven Gesture Generation
We evaluate different representation sizes to find the most effective dimensionality for the representation.
Gesticulator: A framework for semantically-aware speech-driven gesture generation
During speech, people spontaneously gesticulate, which plays a key role in conveying information.
Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows
In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.
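A minimal conditional affine coupling layer in the spirit of flow-based motion models such as MoGlow, which this line of work builds on; this is a generic sketch of the mechanism, not the paper's exact architecture, and the conditioning signal `cond` (speech and style control) is an assumed input.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One coupling step of a conditional normalizing flow: half of
    the pose vector is transformed with a scale/shift predicted from
    the other half plus a control signal. Illustrative only."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                # keep scales well-behaved
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)          # log |det Jacobian| for the NLL loss
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y, cond):
        # Exact inverse, used to sample poses from Gaussian noise.
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)
```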
Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation
We provide an analysis of different representations for the input (speech) and the output (motion) of the network by both objective and subjective evaluations.
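As one concrete example of motion post-processing, a smoothing pass over generated joint trajectories removes frame-to-frame jitter; the Savitzky-Golay filter below is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

# Generated motion: T frames x D pose dimensions (illustrative shapes).
T, D = 300, 45
motion = np.random.randn(T, D).astype(np.float32)

# Smooth each pose dimension over time; the window length and
# polynomial order are tuning knobs, not fixed values.
smoothed = savgol_filter(motion, window_length=9, polyorder=3, axis=0)
assert smoothed.shape == motion.shape
```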
Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach
A key challenge, called gesture style transfer, is to learn a model that generates these gestures for a speaking agent 'A' in the gesturing style of a target speaker 'B'.
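A toy sketch of the conditional-mixture idea: several gesture decoders share speech features, and the target speaker's style embedding produces the mixture weights, so swapping the speaker ID changes the gesturing style. Names and dimensions are hypothetical, and this simplifies the paper's model considerably.

```python
import torch
import torch.nn as nn

class ConditionalMixtureDecoder(nn.Module):
    """Toy conditional mixture over K decoders, gated by a learned
    speaker-style embedding. Purely illustrative."""
    def __init__(self, feat_dim=128, pose_dim=45, n_speakers=30,
                 style_dim=16, K=4):
        super().__init__()
        self.style = nn.Embedding(n_speakers, style_dim)
        self.gate = nn.Linear(style_dim, K)
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, pose_dim) for _ in range(K)])

    def forward(self, speech_feats, speaker_ids):
        # speech_feats: (B, T, feat_dim); speaker_ids: (B,)
        w = torch.softmax(self.gate(self.style(speaker_ids)), dim=-1)  # (B, K)
        outs = torch.stack([e(speech_feats) for e in self.experts], dim=-1)
        # Weighted sum over experts -> (B, T, pose_dim).
        return (outs * w[:, None, None, :]).sum(dim=-1)
```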
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures
We study the relationships between spoken language and co-speech gestures in the context of two key challenges.