Gesture Generation

29 papers with code • 2 benchmarks • 3 datasets

Generation of gestures as a sequence of 3D poses.



Most implemented papers

robosuite: A Modular Simulation Framework and Benchmark for Robot Learning

ARISE-Initiative/robosuite 25 Sep 2020

robosuite is a simulation framework for robot learning powered by the MuJoCo physics engine.

The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation

genea-workshop/genea_numerical_evaluations 22 Aug 2022

All synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.

Learning Individual Styles of Conversational Gesture

amirbar/speech2gesture CVPR 2019

Specifically, we perform cross-modal translation from "in-the-wild" monologue speech of a single speaker to their hand and arm motion.

Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity

ai4r/Gesture-Generation-from-Trimodal-Context 4 Sep 2020

In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures.
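The trimodal idea above can be sketched as a minimal fusion step: per-frame text and audio features are concatenated with a static speaker-identity embedding, and the joint context is mapped to a pose vector per frame. All dimensions, weights, and the `generate_gestures` helper below are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper)
T = 30                        # frames in the gesture sequence
D_TEXT, D_AUDIO, D_ID = 16, 8, 4
D_POSE = 10                   # flattened 3D pose vector per frame

def generate_gestures(text_feats, audio_feats, speaker_id, W, b):
    """Fuse per-frame text/audio features with a speaker embedding
    and linearly map the joint context to a pose per frame."""
    # Broadcast the static speaker embedding across all frames
    id_feats = np.tile(speaker_id, (text_feats.shape[0], 1))
    context = np.concatenate([text_feats, audio_feats, id_feats], axis=1)
    return context @ W + b    # shape (T, D_POSE)

# Random stand-ins for encoded inputs and learned weights
text = rng.normal(size=(T, D_TEXT))
audio = rng.normal(size=(T, D_AUDIO))
speaker = rng.normal(size=(D_ID,))
W = rng.normal(size=(D_TEXT + D_AUDIO + D_ID, D_POSE)) * 0.1
b = np.zeros(D_POSE)

poses = generate_gestures(text, audio, speaker, W, b)
print(poses.shape)  # (30, 10)
```

In the actual model the linear map would be replaced by a learned sequence network; the point here is only the multimodal context construction.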

Analyzing Input and Output Representations for Speech-Driven Gesture Generation

GestureGeneration/Speech_driven_gesture_generation_with_autoencoder arXiv 2019

We evaluate different representation sizes in order to find the most effective dimensionality for the representation.
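One way to make the representation-size question concrete is to fit a linear encoder/decoder (PCA) at several dimensionalities and compare reconstruction error on motion data. This is a generic sketch under assumed toy data, not the paper's autoencoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy motion data: 200 frames of a 45-dim pose (e.g. 15 joints x 3D)
motion = rng.normal(size=(200, 45))

def pca_codec(data, n_components):
    """Fit a linear encoder/decoder of a given dimensionality."""
    mean = data.mean(axis=0)
    # Principal directions from the SVD of the centered data
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    basis = vt[:n_components]
    encode = lambda x: (x - mean) @ basis.T
    decode = lambda z: z @ basis + mean
    return encode, decode

# Compare reconstruction error across representation sizes
errors = {}
for dim in (5, 15, 30):
    enc, dec = pca_codec(motion, dim)
    errors[dim] = float(np.mean((motion - dec(enc(motion))) ** 2))
    print(dim, round(errors[dim], 4))
```

Error falls as the representation grows; the "most effective dimensionality" trades this reconstruction quality against the downstream network's ability to learn the mapping.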

Gesticulator: A framework for semantically-aware speech-driven gesture generation

Svito-zar/gesticulator 25 Jan 2020

During speech, people spontaneously gesticulate, which plays a key role in conveying information.

Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows

simonalexanderson/StyleGestures Computer Graphics Forum 2020

In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.
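The normalising-flow machinery behind this approach rests on exactly invertible transforms. A minimal sketch of one affine coupling layer (the standard flow building block, with made-up parameters, not StyleGestures' model) shows the forward/inverse pair:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 6  # toy pose-feature dimensionality (must be even here)

# Hypothetical learned parameters of one coupling layer
W_s = rng.normal(size=(D // 2, D // 2)) * 0.1
W_t = rng.normal(size=(D // 2, D // 2)) * 0.1

def coupling_forward(x):
    """Transform half the dimensions conditioned on the other half."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t
    y2 = x2 * np.exp(s) + t        # invertible affine transform
    return np.concatenate([x1, y2])

def coupling_inverse(y):
    """Exact inverse: recover x2 from y2 using the same s, t."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = np.tanh(y1 @ W_s), y1 @ W_t
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
roundtrip = coupling_inverse(coupling_forward(x))
print(np.allclose(roundtrip, x))  # True
```

Invertibility is what lets such models both sample motion on the fly and evaluate exact likelihoods, which the paper exploits for style control.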

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation

GestureGeneration/Speech_driven_gesture_generation_with_autoencoder 16 Jul 2020

We provide an analysis of different representations for the input (speech) and the output (motion) of the network by both objective and subjective evaluations.

Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach

chahuja/mix-stage ECCV 2020

A key challenge, called gesture style transfer, is to learn a model that generates these gestures for a speaking agent 'A' in the gesturing style of a target speaker 'B'.