Gesture Generation

9 papers with code • 0 benchmarks • 0 datasets

Generation of gestures as a sequence of 3D poses.
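The task definition above suggests a common data layout: a gesture clip as an array of shape (frames, joints, 3), one 3D position (or rotation) per joint per frame. The sketch below is only an illustrative assumption about this representation, not code from any listed repository; the function name and joint count are made up.

```python
import numpy as np

def make_gesture_sequence(num_frames: int, num_joints: int, seed: int = 0) -> np.ndarray:
    """Placeholder gesture clip: (num_frames, num_joints, 3) array of 3D joint coordinates.

    Real systems would predict these values from speech audio/text; here we
    just fill the array with random numbers to show the data layout.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_frames, num_joints, 3))

# A 2-second clip at 30 fps with a 15-joint upper-body skeleton (hypothetical sizes).
clip = make_gesture_sequence(num_frames=60, num_joints=15)
```

Most papers in this list differ mainly in how they map speech features onto such a pose sequence, and in whether poses are positions, joint angles, or learned latent codes.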


Greatest papers with code

Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows

simonalexanderson/StyleGestures Computer Graphics Forum 2020

In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.

Gesture Generation Motion Synthesis +1

Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity

ai4r/Gesture-Generation-from-Trimodal-Context 4 Sep 2020

In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures.

Gesture Generation

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation

GestureGeneration/Speech_driven_gesture_generation_with_autoencoder 16 Jul 2020

We provide an analysis of different representations for the input (speech) and the output (motion) of the network by both objective and subjective evaluations.

Gesture Generation Representation Learning

Gesticulator: A framework for semantically-aware speech-driven gesture generation

Svito-zar/gesticulator 25 Jan 2020

During speech, people spontaneously gesticulate, which plays a key role in conveying information.

Gesture Generation

Analyzing Input and Output Representations for Speech-Driven Gesture Generation

GestureGeneration/Speech_driven_gesture_generation_with_autoencoder arXiv 2019

We evaluate different representation sizes in order to find the most effective dimensionality for the representation.

Gesture Generation Human-Computer Interaction (ACM classes: I.2.6; I.5.1; J.4)

A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents

nagyrajmund/gesticulating_agent_unity 24 Feb 2021

To date, end-to-end gesture generation methods have not been evaluated in real-time interaction with users.

Chatbot Gesture Generation

DeepNAG: Deep Non-Adversarial Gesture Generation

Maghoumi/DeepNAG 18 Nov 2020

We find that DeepNAG outperforms DeepGAN in accuracy, training time (up to 17x faster), and realism, thereby opening the door to a new line of research in generator network design and training for gesture synthesis.

Data Augmentation Dynamic Time Warping +2

No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures

chahuja/aisle 1 Oct 2020

We study relationships between spoken language and co-speech gestures in the context of two key challenges.

Gesture Generation

Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach

chahuja/mix-stage ECCV 2020

A key challenge, called gesture style transfer, is to learn a model that generates these gestures for a speaking agent 'A' in the gesturing style of a target speaker 'B'.

Gesture Generation Style Transfer