no code implementations • 16 Oct 2023 • Deok-Kyeong Jang, Yuting Ye, Jungdam Won, Sung-Hee Lee
Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character whose context most closely matches that of the input motion feature.
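A minimal sketch of the context-matching idea, not the authors' Neural Context Matcher: the feature dimensions and the target character's candidate feature bank are hypothetical stand-ins, and "context" similarity is approximated here by dot-product attention.

```python
import torch
import torch.nn.functional as F

def match_context(input_feat, target_bank):
    """Blend target-character features, weighted by similarity to input_feat.

    input_feat:  (D,)   feature of the input motion
    target_bank: (N, D) candidate motion features of the target character
    """
    scores = target_bank @ input_feat       # (N,) similarity per candidate
    weights = F.softmax(scores, dim=0)      # soft nearest-context selection
    return weights @ target_bank            # (D,) matched target feature

matched = match_context(torch.randn(64), torch.randn(100, 64))
```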
no code implementations • 24 Sep 2023 • Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
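A toy illustration of alternating a learned motion prior with a projective-dynamics-style constraint projection; the damping "prior" and the 1D floor constraint below are hypothetical placeholders, not DROP's actual components.

```python
import numpy as np

def prior_step(q):
    # Stand-in for a generative motion prior proposing the next pose.
    return 0.9 * q

def project(q, floor=0.0):
    # Local projective-dynamics-style step: project coordinates onto the
    # feasible set (here, a simple floor constraint).
    return np.maximum(q, floor)

q = np.array([0.5, -0.2, 1.0])
for _ in range(10):
    q = project(prior_step(q))  # prior proposes, projection enforces physics
```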
no code implementations • 4 Jul 2023 • Daniele Reda, Jungdam Won, Yuting Ye, Michiel Van de Panne, Alexander Winkler
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
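An illustrative stand-in for such real-time retargeting, not the paper's method: a single network maps sparse sensor readings plus a morphology embedding to a full-body pose, so one model can serve characters of different shapes. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SparseRetargeter(nn.Module):
    def __init__(self, sensor_dim=18, morph_dim=16, pose_dim=69):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim + morph_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),  # joint rotations of the target character
        )

    def forward(self, sensors, morphology):
        return self.net(torch.cat([sensors, morphology], dim=-1))

pose = SparseRetargeter()(torch.randn(1, 18), torch.randn(1, 16))
```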
no code implementations • 9 Jun 2023 • Sunmin Lee, Sebastian Starke, Yuting Ye, Jungdam Won, Alexander Winkler
Most existing methods for motion tracking avoid environment interactions beyond foot-floor contact, since such interactions involve complex dynamics and hard constraints.
1 code implementation • 7 Jun 2023 • Jungnam Park, Moon Seok Park, Jehee Lee, Jungdam Won
We present a novel generative model, called Bidirectional GaitNet, that learns the relationship between human anatomy and its gait.
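A hypothetical sketch of the bidirectional idea: one network predicts gait features from anatomy, the other recovers anatomy from gait, with a cycle-consistency objective tying them together. Dimensions and architectures are placeholders, not Bidirectional GaitNet's.

```python
import torch
import torch.nn as nn

anatomy_to_gait = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 64))
gait_to_anatomy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

anatomy = torch.randn(1, 32)             # e.g. skeletal/muscular parameters
gait = anatomy_to_gait(anatomy)          # forward model: anatomy -> gait
anatomy_hat = gait_to_anatomy(gait)      # backward model: gait -> anatomy
cycle_loss = (anatomy - anatomy_hat).pow(2).mean()  # consistency objective
```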
no code implementations • 5 May 2023 • Jinseok Bae, Jungdam Won, Donggeun Lim, Cheol-Hui Min, Young Min Kim
The proposed PMP allows us to assemble multiple part skills to animate a character, creating a diverse set of motions by combining existing data in different ways.
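A simplified sketch of assembling per-part skills into one full-body action; the part policies and their output sizes are hypothetical, not PMP's design.

```python
import torch
import torch.nn as nn

part_policies = {
    "lower_body": nn.Linear(60, 12),  # state -> lower-body joint targets
    "upper_body": nn.Linear(60, 14),  # state -> upper-body joint targets
}

def full_body_action(state):
    # Each part skill controls its own joints; concatenating their outputs
    # yields the full-body action, so skills can be recombined freely.
    return torch.cat([policy(state) for policy in part_policies.values()], dim=-1)

action = full_body_action(torch.randn(1, 60))  # (1, 26) combined action
```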
no code implementations • ICCV 2023 • Mingyi Shi, Sebastian Starke, Yuting Ye, Taku Komura, Jungdam Won
We present a novel motion prior, called PhaseMP, that models a probability distribution over pose transitions conditioned on a frequency-domain feature extracted by a periodic autoencoder.
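A rough sketch of a phase-conditioned transition prior: an MLP maps the current pose plus a frequency-domain phase feature to the mean and log-variance of a Gaussian over the next pose. All sizes are assumptions, and the paper's periodic autoencoder is replaced here by an abstract phase vector.

```python
import torch
import torch.nn as nn

class TransitionPrior(nn.Module):
    def __init__(self, pose_dim=69, phase_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + phase_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * pose_dim),  # mean and log-variance
        )

    def forward(self, pose, phase):
        mu, logvar = self.net(torch.cat([pose, phase], dim=-1)).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample next pose

next_pose = TransitionPrior()(torch.randn(1, 69), torch.randn(1, 10))
```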
no code implementations • ICCV 2023 • Yijun Qian, Jack Urbanek, Alexander G. Hauptmann, Jungdam Won
Given the wide range of applications, there is increasing focus on generating 3D human motions from textual descriptions.
1 code implementation • 26 Oct 2022 • Jonas Gehring, Deepak Gopinath, Jungdam Won, Andreas Krause, Gabriel Synnaeve, Nicolas Usunier
Starting with a learned joint latent space, we separately train a generative model of demonstration sequences and an accompanying low-level policy.
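A schematic of the two-level setup: a high-level generative model emits a latent vector from demonstration-like context, and a low-level policy decodes it, together with the current state, into an action. Modules and sizes are illustrative only, not the paper's architecture.

```python
import torch
import torch.nn as nn

high_level = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
to_latent = nn.Linear(64, 16)                    # sequence model -> latent z
low_level = nn.Sequential(nn.Linear(48 + 16, 128), nn.ReLU(), nn.Linear(128, 12))

obs_seq = torch.randn(1, 20, 32)                 # demonstration-like context
h, _ = high_level(obs_seq)
z = to_latent(h[:, -1])                          # latent skill for this step
action = low_level(torch.cat([torch.randn(1, 48), z], dim=-1))
```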
no code implementations • 20 Sep 2022 • Alexander Winkler, Jungdam Won, Yuting Ye
Real-time tracking of human body motion is crucial for interactive and immersive experiences in AR/VR.
1 code implementation • 29 Mar 2022 • Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu
Real-time human motion reconstruction from a sparse set of (e.g., six) wearable IMUs provides a non-intrusive and economical approach to motion capture.
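An illustrative regressor from six IMU readings to a full-body pose; the input layout (6D rotation plus 3D acceleration per IMU) and the SMPL-like 72D pose output are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

imu_dim = 6 * (6 + 3)  # per IMU: 6D rotation representation + 3D acceleration
model = nn.Sequential(nn.Linear(imu_dim, 256), nn.ReLU(), nn.Linear(256, 72))

imus = torch.randn(1, imu_dim)   # one frame of six-IMU measurements
pose = model(imus)               # (1, 72) full-body joint rotations
```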
1 code implementation • 9 Feb 2022 • Jihoon Kim, Taehyun Byun, Seungyoun Shin, Jungdam Won, Sungjoon Choi
Motion in-betweening (MIB) is the process of generating intermediate skeletal movement between given start and target poses while preserving the naturalness of the motion, such as the periodic footstep pattern of walking.
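For contrast, a naive in-betweening baseline: linear interpolation between the start and target poses. Learned MIB models replace this with a network that preserves motion naturalness (e.g. footstep periodicity); the pose dimension below is a placeholder.

```python
import numpy as np

def lerp_inbetween(start_pose, target_pose, num_frames):
    # Straight-line interpolation per pose coordinate -- ignores dynamics.
    ts = np.linspace(0.0, 1.0, num_frames)[:, None]
    return (1.0 - ts) * start_pose + ts * target_pose  # (num_frames, pose_dim)

frames = lerp_inbetween(np.zeros(69), np.ones(69), num_frames=30)
```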