In this paper we present our preliminary work on model-based behavioral analysis of horse motion.
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.
We use a new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild.
Ranked #1 on 3D Human Reconstruction on the Expressive Hands and Faces (EHF) dataset (TR V2V (mm), left-hand metric).
We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model; here we use SMPL [doi:10.1145/2816795.2818013], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh.
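The core idea behind marker-based fitting of this kind can be illustrated with a toy sketch: given observed mocap markers, find the model parameters that minimize the squared distance between predicted and observed marker positions. The example below is a deliberately simplified stand-in (a 2D rigid pose with rotation and translation instead of full SMPL pose and shape); all names here are illustrative, not part of MoSh++ or SMPL.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for MoSh-style marker fitting: recover a rigid pose
# (rotation about the origin plus translation) that best explains
# observed markers. MoSh++ optimizes full SMPL pose/shape parameters;
# this is an illustrative least-squares sketch only.

# Body-frame marker layout (hypothetical 3-marker template).
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def transform(params, pts):
    """Apply rotation by theta and translation (tx, ty) to 2D points."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T + np.array([tx, ty])

# Synthesize "observed" markers from a known pose plus small noise.
rng = np.random.default_rng(0)
true_params = np.array([0.3, 0.5, -0.2])
observed = transform(true_params, template) + 1e-3 * rng.standard_normal(template.shape)

def energy(params):
    # Sum of squared marker-to-marker distances, the basic term in
    # marker-based model fitting objectives.
    return np.sum((transform(params, template) - observed) ** 2)

fit = minimize(energy, x0=np.zeros(3))
print(fit.x)
```

The recovered parameters should closely match `true_params`; the real objective adds shape parameters, per-marker offsets, and temporal terms, but the least-squares structure is the same.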