Motion Captioning

6 papers with code • 2 benchmarks • 2 datasets

Generating textual descriptions of human motion.

Most implemented papers

MotionGPT: Human Motion as a Foreign Language

openmotionlab/motiongpt NeurIPS 2023

Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language.
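The "motion vocabulary" idea can be illustrated with a minimal sketch: each motion frame's feature vector is mapped to its nearest entry in a learned codebook (VQ-VAE style), producing discrete motion tokens that can be modeled like words. The codebook values and feature sizes below are toy assumptions, not MotionGPT's actual configuration.

```python
import numpy as np

def quantize_motion(frames, codebook):
    """Map each motion frame feature to the index of its nearest codebook
    entry, yielding a sequence of discrete 'motion tokens'.

    frames:   (T, D) array of per-frame motion features
    codebook: (K, D) array of learned code vectors
    """
    # Squared Euclidean distance from every frame to every code vector.
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)  # (T,) token ids in [0, K)

# Toy example: 4 frames with 2-D features, codebook of 3 entries.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
frames = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [1.1, -0.1]])
tokens = quantize_motion(frames, codebook)  # → [0, 1, 2, 1]
```

The resulting token sequence can then be fed to a language model alongside text tokens for unified modeling.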

TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts

EricGuo5513/TM2T 4 Jul 2022

Our approach is flexible and can be used for both text2motion and motion2text tasks.

Guided Attention for Interpretable Motion Captioning

rd20karim/m2t-interpretable 11 Oct 2023

Diverse and extensive work has recently been conducted on text-conditioned human motion generation.

Motion2Language, unsupervised learning of synchronized semantic motion segmentation

rd20karim/M2T-Segmentation 16 Oct 2023

We find that our contributions to both the attention mechanism and the encoder architecture additively improve not only the quality of the generated text (BLEU and semantic equivalence) but also its synchronization with the motion.

Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs

szqwu/Motion-Agent 27 May 2024

This is accomplished by encoding and quantizing motions into discrete tokens that align with the language model's vocabulary.
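Aligning quantized motion tokens with a language model's vocabulary is often done by offsetting motion codebook indices past the text vocabulary, so text and motion share a single id space. The sizes below are hypothetical, chosen only for illustration; the paper's actual vocabulary layout may differ.

```python
# Hypothetical sizes: a text vocabulary of 32000 ids plus 512 motion codes.
TEXT_VOCAB_SIZE = 32000

def to_unified_ids(motion_tokens):
    """Shift motion codebook indices past the text vocabulary so that
    text and motion tokens live in one shared id space."""
    return [TEXT_VOCAB_SIZE + t for t in motion_tokens]

def from_unified_ids(ids):
    """Recover raw motion codebook indices from unified ids, skipping
    any ids that fall in the text range."""
    return [i - TEXT_VOCAB_SIZE for i in ids if i >= TEXT_VOCAB_SIZE]

unified = to_unified_ids([0, 5])        # → [32000, 32005]
motion = from_unified_ids([32000, 7, 32005])  # → [0, 5]
```

With this mapping, a single autoregressive model can emit either a word or a motion code at every step.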

Transformer with Controlled Attention for Synchronous Motion Captioning

rd20karim/synch-transformer 13 Sep 2024

In this paper, we address synchronous motion captioning, a challenging task that aims to generate language descriptions synchronized with human motion sequences.