Motion Synthesis

87 papers with code • 9 benchmarks • 13 datasets

in2IN: Leveraging individual Information to Generate Human INteractions

pabloruizponce/in2IN 15 Apr 2024

To this end, we introduce in2IN, a novel diffusion model for human-human motion generation that is conditioned not only on the textual description of the overall interaction but also on the individual descriptions of the actions performed by each person involved in the interaction.

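As a rough illustration of the dual-conditioning idea described above (not in2IN's actual architecture), the sketch below shows a denoiser that receives both a shared interaction-level text embedding and a per-person text embedding. All module names, dimensions, and the fusion scheme are assumptions made for illustration only.

```python
# Illustrative sketch (not the in2IN architecture): a denoiser for two-person
# motion conditioned on BOTH an interaction-level text embedding and one
# individual text embedding per person. All dimensions are made up.
import torch
import torch.nn as nn

class DualConditionedDenoiser(nn.Module):
    def __init__(self, motion_dim=263, text_dim=512, hidden=512):
        super().__init__()
        # Fuse the shared interaction embedding with each person's own embedding.
        self.cond_proj = nn.Linear(2 * text_dim, hidden)
        self.motion_proj = nn.Linear(motion_dim, hidden)
        self.backbone = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_motion, interaction_emb, individual_emb):
        # noisy_motion:    (batch, frames, motion_dim)  one person's noisy motion
        # interaction_emb: (batch, text_dim)            text for the whole interaction
        # individual_emb:  (batch, text_dim)            text for this person's action
        cond = self.cond_proj(torch.cat([interaction_emb, individual_emb], dim=-1))
        h = self.motion_proj(noisy_motion) + cond.unsqueeze(1)  # broadcast over frames
        h, _ = self.backbone(h)
        return self.out(h)  # predicted clean motion (or noise) per frame

# One denoising call per person, sharing the interaction embedding:
model = DualConditionedDenoiser()
x_a, x_b = torch.randn(1, 60, 263), torch.randn(1, 60, 263)
inter, ind_a, ind_b = torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 512)
pred_a = model(x_a, inter, ind_a)
pred_b = model(x_b, inter, ind_b)
```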

ParCo: Part-Coordinating Text-to-Motion Synthesis

qrzou/parco 27 Mar 2024

However, these methods encounter challenges such as a lack of coordination between different part motions and difficulty for the network in understanding part concepts.

Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance

afford-motion/afford-motion 26 Mar 2024

Despite significant advancements in text-to-motion synthesis, generating language-guided human motion within 3D environments poses substantial challenges.

Driving Animatronic Robot Facial Expression From Speech

library87/OpenRoboExp 19 Mar 2024

The proposed approach is capable of generating highly realistic, real-time facial expressions from speech on an animatronic face, significantly advancing robots' ability to replicate nuanced human expressions for natural interaction.

Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives

li-ronghui/LODGE 15 Mar 2024

In contrast, the second stage is the local diffusion, which generates detailed motion sequences in parallel under the guidance of the dance primitives and choreographic rules.

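The two-stage control flow described above might look roughly like the following sketch. The global stage, the primitive-guided "denoising", and the parallel segment refinement are placeholders that only mirror the structure the excerpt describes; they are not Lodge's actual models or choreographic rules.

```python
# Illustrative control flow only (placeholder maths, not Lodge's actual model):
# stage 1 produces sparse "dance primitives" for a long sequence, stage 2 runs
# a local diffusion-style refinement on short segments in parallel, each
# segment guided by the primitive nearest to it.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

MOTION_DIM, SEG_LEN = 66, 32

def global_stage(music_features):
    # Placeholder: one coarse keyframe ("primitive") every SEG_LEN frames.
    n_frames = len(music_features)
    return {t: np.random.randn(MOTION_DIM) for t in range(0, n_frames, SEG_LEN)}

def local_stage(segment_start, primitives, steps=10):
    # Placeholder "denoising": start from noise and pull each frame toward the
    # nearest primitive, standing in for guidance by primitives + choreo rules.
    x = np.random.randn(SEG_LEN, MOTION_DIM)
    anchor_t = min(primitives, key=lambda t: abs(t - segment_start))
    anchor = primitives[anchor_t]
    for _ in range(steps):
        x += 0.1 * (anchor - x)          # crude guidance toward the primitive
    return segment_start, x

def generate_long_dance(music_features):
    primitives = global_stage(music_features)
    starts = range(0, len(music_features), SEG_LEN)
    with ThreadPoolExecutor() as pool:   # segments are refined in parallel
        parts = pool.map(lambda s: local_stage(s, primitives), starts)
    return np.concatenate([x for _, x in sorted(parts)], axis=0)

dance = generate_long_dance(np.zeros((256, 35)))   # fake music features
print(dance.shape)                                  # (256, 66)
```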

Seamless Human Motion Composition with Blended Positional Encodings

BarqueroGerman/FlowMDM 23 Feb 2024

Conditional human motion generation is an important topic with many applications in virtual reality, gaming, and robotics.

Self-Correcting Self-Consuming Loops for Generative Model Training

nate-gillman/self-correcting-self-consuming 11 Feb 2024

As synthetic data becomes higher quality and proliferates on the internet, machine learning models are increasingly trained on a mix of human- and machine-generated data.

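A toy version of such a self-consuming training loop with a correction step is sketched below. The Gaussian "model" and the clamping "correction" are stand-ins chosen for brevity, not the paper's actual setup or correction function.

```python
# Toy illustration of a self-consuming training loop with a correction step
# (generic idea only). The "model" is just a Gaussian fit to 1-D data, and
# "correct" clamps synthetic samples to a plausible range, standing in for an
# expert correction such as a physics simulator for motion.
import random, statistics

real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]

def fit(data):                      # "train": estimate mean and std
    return statistics.mean(data), statistics.stdev(data)

def sample(model, n):               # "generate": draw synthetic data
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

def correct(x):                     # placeholder self-correction operator
    return max(-3.0, min(3.0, x))

model = fit(real_data)
for generation in range(5):
    synthetic = [correct(x) for x in sample(model, 1000)]
    # Each generation consumes a mix of human data and corrected machine data.
    model = fit(real_data + synthetic)
    print(f"gen {generation}: mean={model[0]:.3f}, std={model[1]:.3f}")
```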

IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition

ZikangLeng/IMUGPT 1 Feb 2024

With the emergence of generative AI models such as large language models (LLMs) and text-driven motion synthesis models, language has become a promising source data modality as well, as shown in proofs of concept such as IMUGPT.

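The general language-to-sensor idea can be sketched as follows: a text-driven motion synthesis model (stubbed here with random smooth trajectories) yields 3-D joint positions, from which virtual accelerometer readings are derived by twice differentiating the position of the joint where a sensor would sit. The stub, joint index, and frame rate are illustrative assumptions, not IMUGPT's implementation.

```python
# Sketch of language-driven virtual IMU data for sensor-based HAR:
# text -> (stubbed) motion synthesis -> joint positions -> virtual accelerometer.
import numpy as np

FPS = 30.0

def text_to_motion(prompt, n_frames=90, n_joints=22):
    # Stand-in for a real text-driven motion synthesis model.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    noise = rng.standard_normal((n_frames, n_joints, 3))
    return np.cumsum(noise, axis=0) * 0.01      # smooth-ish joint positions (m)

def virtual_accelerometer(joint_positions, joint_idx, fps=FPS):
    # Second derivative of position ~ linear acceleration at that joint.
    pos = joint_positions[:, joint_idx, :]
    vel = np.gradient(pos, 1.0 / fps, axis=0)
    acc = np.gradient(vel, 1.0 / fps, axis=0)
    acc[:, 2] += 9.81                            # add gravity along the z axis
    return acc

motion = text_to_motion("a person waves with the right hand")
wrist_imu = virtual_accelerometer(motion, joint_idx=20)   # e.g. a wrist joint
print(wrist_imu.shape)    # (90, 3) virtual accelerometer samples for HAR training
```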

GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation

xuehao-gao/guess 4 Jan 2024

The whole text-driven human motion synthesis problem is then divided into multiple abstraction levels and solved with a multi-stage generation framework built on a cascaded latent diffusion model: an initial generator first produces the coarsest human motion guess from a given text description; then, a series of successive generators gradually enrich the motion details based on the textual description and the previously synthesized results.

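The cascaded coarse-to-fine flow described above is sketched below with placeholder stages (no diffusion model involved). The skeleton sizes, text embedding, and refinement rule are invented purely to show how each stage consumes both the text and the previous stage's output; they are not GUESS's latent diffusion models.

```python
# Control-flow sketch of a cascaded coarse-to-fine generator: the first stage
# makes a coarse motion guess from text, then each later stage refines the
# previous result while still reading the same text conditioning.
import numpy as np

N_FRAMES, N_JOINTS = 60, 22

def embed_text(text):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def coarse_generator(text_emb):
    # Coarsest guess: a handful of body "parts" instead of full joints.
    return np.zeros((N_FRAMES, 5, 3)) + text_emb[:3] * 0.01

def refine(prev_motion, text_emb, n_joints_out):
    # Upsample the previous stage's skeleton and add text-dependent detail.
    reps = int(np.ceil(n_joints_out / prev_motion.shape[1]))
    up = np.repeat(prev_motion, reps, axis=1)[:, :n_joints_out, :]
    detail = 0.001 * text_emb[:3].reshape(1, 1, 3)
    return up + detail

def generate(text):
    emb = embed_text(text)
    motion = coarse_generator(emb)
    for n_joints in (11, N_JOINTS):          # successively richer skeletons
        motion = refine(motion, emb, n_joints)
    return motion

print(generate("a person jumps forward").shape)   # (60, 22, 3)
```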

FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing

mingyuan-zhang/FineMoGen NeurIPS 2023 (22 Dec 2023)

Notably, FineMoGen further enables zero-shot motion editing with the aid of modern large language models (LLMs), faithfully manipulating motion sequences according to fine-grained instructions.

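A glue-code sketch of such LLM-assisted editing is given below. The structured spec, the LLM call, and the editor are hypothetical interfaces meant only to illustrate the pattern of expanding a coarse instruction into fine-grained spatio-temporal constraints; they are not FineMoGen's API.

```python
# Glue-code sketch of LLM-assisted motion editing (hypothetical interfaces):
# a free-form edit request is expanded into fine-grained per-body-part,
# per-time-span descriptions, which a text-conditioned editor then applies.
from dataclasses import dataclass

@dataclass
class FineGrainedSpec:
    body_part: str
    start_s: float
    end_s: float
    description: str

def llm_expand_instruction(instruction: str) -> list[FineGrainedSpec]:
    # Placeholder for a call to a large language model that rewrites a coarse
    # instruction into structured, fine-grained spatio-temporal constraints.
    return [
        FineGrainedSpec("right arm", 0.0, 1.5, "raise the right arm overhead"),
        FineGrainedSpec("legs", 0.0, 3.0, "keep walking forward"),
    ]

def edit_motion(motion, specs: list[FineGrainedSpec]):
    # Placeholder editor: a real system would re-generate only the affected
    # body parts and time spans, conditioned on each fine-grained description.
    for spec in specs:
        print(f"editing {spec.body_part} from {spec.start_s}s to {spec.end_s}s: "
              f"{spec.description}")
    return motion

specs = llm_expand_instruction("make him wave while he walks")
edited = edit_motion(motion=None, specs=specs)
```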