Free-T2M: Frequency Enhanced Text-to-Motion Diffusion Model With Consistency Loss

30 Jan 2025  ·  Wenshuo Chen, Haozhe Jia, Songning Lai, Keming Wu, Hongru Xiao, Lijie Hu, Yutao Yue

Rapid progress in text-to-motion generation has been largely driven by diffusion models. However, existing methods focus solely on temporal modeling, thereby overlooking frequency-domain analysis. We identify two key phases in motion denoising: the **semantic planning stage** and the **fine-grained improving stage**. To address these phases effectively, we propose the **Fre**quency-**e**nhanced **t**ext-**to**-**m**otion diffusion model (**Free-T2M**), which incorporates stage-specific consistency losses that enhance the robustness of static features and improve fine-grained accuracy. Extensive experiments demonstrate the effectiveness of our method. Specifically, on StableMoFusion, our method reduces the FID from **0.189** to **0.051**, establishing new state-of-the-art performance among diffusion-based architectures. These findings highlight the importance of incorporating frequency-domain insights into text-to-motion generation for more precise and robust results.
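
The idea of a stage-specific, frequency-domain consistency loss can be illustrated with a small sketch. The snippet below is a minimal, hypothetical PyTorch example and **not** the authors' implementation: it takes an FFT along the temporal axis of a motion sequence, compares low-frequency magnitudes at early (high-noise, semantic-planning) timesteps, and full-band magnitudes at late (fine-grained) timesteps. The timestep split `stage_split`, the cutoff `k_low`, the tensor shapes, and the magnitude-only comparison are all illustrative assumptions.

```python
# Minimal sketch of a stage-specific frequency-domain consistency loss
# (assumed details; not the paper's official code).
import torch
import torch.nn.functional as F


def frequency_consistency_loss(pred_motion: torch.Tensor,
                               gt_motion: torch.Tensor,
                               t: torch.Tensor,
                               num_steps: int = 1000,
                               stage_split: float = 0.5,
                               k_low: int = 8) -> torch.Tensor:
    """pred_motion, gt_motion: (B, T, D) denoised vs. ground-truth motion.
    t: (B,) diffusion timesteps; larger t = noisier, earlier denoising steps."""
    # Real FFT over the temporal axis: (B, T, D) -> (B, T//2 + 1, D) complex coefficients.
    pred_freq = torch.fft.rfft(pred_motion, dim=1)
    gt_freq = torch.fft.rfft(gt_motion, dim=1)

    # Low-frequency band: coarse, quasi-static structure of the motion.
    low_band = F.mse_loss(pred_freq[:, :k_low].abs(), gt_freq[:, :k_low].abs(),
                          reduction="none").mean(dim=(1, 2))
    # Full band: fine-grained temporal detail.
    full_band = F.mse_loss(pred_freq.abs(), gt_freq.abs(),
                           reduction="none").mean(dim=(1, 2))

    # Early (high-noise) steps -> semantic planning -> supervise low frequencies;
    # late steps -> fine-grained improving -> supervise the full band.
    is_planning = (t.float() / num_steps) > stage_split
    return torch.where(is_planning, low_band, full_band).mean()


# Toy usage with random tensors (shapes are illustrative, e.g. 196 frames, 263-dim features):
pred = torch.randn(4, 196, 263)
gt = torch.randn(4, 196, 263)
t = torch.randint(0, 1000, (4,))
loss = frequency_consistency_loss(pred, gt, t)
```

In practice such a term would be added to the standard diffusion denoising objective with a weighting schedule; comparing magnitudes rather than complex coefficients is one possible design choice to keep the loss phase-invariant.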

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Motion Synthesis | HumanML3D | Free-T2M (StableMoFusion) | FID | 0.051 | #7 |
| Motion Synthesis | HumanML3D | Free-T2M (StableMoFusion) | Diversity | 9.480 | #25 |
| Motion Synthesis | HumanML3D | Free-T2M (StableMoFusion) | R-Precision Top-3 | 0.803 | #9 |
| Motion Synthesis | KIT Motion-Language | Free-T2M (StableMoFusion) | FID | 0.155 | #2 |
| Motion Synthesis | KIT Motion-Language | Free-T2M (StableMoFusion) | R-Precision Top-3 | 0.789 | #3 |
| Motion Synthesis | KIT Motion-Language | Free-T2M (StableMoFusion) | Diversity | 10.902 | #14 |
