Search Results for author: Jungdam Won

Found 12 papers, 4 papers with code

MOCHA: Real-Time Motion Characterization via Context Matching

no code implementations16 Oct 2023 Deok-Kyeong Jang, Yuting Ye, Jungdam Won, Sung-Hee Lee

Central to our framework is the Neural Context Matcher, which generates a motion feature for the target character whose context is most similar to that of the input motion feature.
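
As a rough illustration of this matching idea, the sketch below blends a bank of target-character motion features by context similarity. The context encoder, tensor shapes, and softmax weighting are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn.functional as F

    def match_context(input_feat, target_feats, target_contexts, context_encoder):
        """input_feat: (D,) source motion feature; target_feats: (N, D) bank of
        target-character features; target_contexts: (N, C) their context embeddings."""
        query = context_encoder(input_feat)              # (C,) context of the input motion
        scores = target_contexts @ query                 # (N,) context similarity per entry
        weights = F.softmax(scores / query.shape[-1] ** 0.5, dim=0)
        return weights @ target_feats                    # (D,) feature with best-matching context

    encoder = torch.nn.Linear(64, 32)                    # hypothetical context encoder
    matched = match_context(torch.randn(64), torch.randn(100, 64),
                            torch.randn(100, 32), encoder)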

DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics

no code implementations24 Sep 2023 Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu

We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.

Data Augmentation · Motion Prediction +1
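
To make the two named ingredients concrete, here is a toy combination of a prior's pose proposal with a projective-dynamics-style local/global solve. The constraint handling and blend weights are simplified placeholders, not the paper's formulation.

    import torch

    def pd_step(q_prior, constraints, n_iters=10, stiffness=1.0):
        """q_prior: (D,) next pose proposed by a learned motion prior.
        constraints: callables projecting a pose onto one constraint manifold
                     (e.g., contact, bone length).
        Alternates local projections and a global blend, as in projective dynamics."""
        q = q_prior.clone()
        for _ in range(n_iters):
            # local step: project the current pose onto each constraint set
            projections = [c(q) for c in constraints]
            # global step: least-squares-style blend of proposal and projections
            q = (q_prior + stiffness * sum(projections)) / (1.0 + stiffness * len(projections))
        return q

    # hypothetical constraint: keep the root height (index 0) above the floor
    floor = lambda q: torch.cat([q[:1].clamp(min=0.0), q[1:]])
    q_next = pd_step(torch.randn(32), [floor])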

Physics-based Motion Retargeting from Sparse Inputs

no code implementations4 Jul 2023 Daniele Reda, Jungdam Won, Yuting Ye, Michiel Van de Panne, Alexander Winkler

We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.

Motion Retargeting

QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors

no code implementations9 Jun 2023 Sunmin Lee, Sebastian Starke, Yuting Ye, Jungdam Won, Alexander Winkler

Most existing methods for motion tracking avoid environment interactions other than foot-floor contact, owing to the complex dynamics and hard constraints that such interactions involve.

Bidirectional GaitNet: A Bidirectional Prediction Model of Human Gait and Anatomical Conditions

1 code implementation7 Jun 2023 Jungnam Park, Moon Seok Park, Jehee Lee, Jungdam Won

We present a novel generative model, called Bidirectional GaitNet, that learns the relationship between human anatomy and its gait.

Anatomy
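
A minimal sketch of what "bidirectional" could look like here: one network maps anatomical parameters to gait features and another maps gait back to anatomy, trained jointly with a cycle term. Dimensions and losses are illustrative assumptions; the actual model is generative.

    import torch
    import torch.nn as nn

    anatomy_dim, gait_dim = 16, 64

    forward_net = nn.Sequential(nn.Linear(anatomy_dim, 128), nn.ReLU(),
                                nn.Linear(128, gait_dim))          # anatomy -> gait
    backward_net = nn.Sequential(nn.Linear(gait_dim, 128), nn.ReLU(),
                                 nn.Linear(128, anatomy_dim))      # gait -> anatomy
    opt = torch.optim.Adam([*forward_net.parameters(), *backward_net.parameters()], lr=1e-3)

    def train_step(anatomy, gait):
        pred_gait = forward_net(anatomy)
        pred_anatomy = backward_net(gait)
        cycle = backward_net(pred_gait)            # anatomy -> gait -> anatomy
        loss = (nn.functional.mse_loss(pred_gait, gait)
                + nn.functional.mse_loss(pred_anatomy, anatomy)
                + nn.functional.mse_loss(cycle, anatomy))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()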

PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors

no code implementations5 May 2023 Jinseok Bae, Jungdam Won, Donggeun Lim, Cheol-Hui Min, Young Min Kim

The proposed PMP allows us to assemble multiple part skills to animate a character, creating a diverse set of motions from different combinations of existing data.
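
One common way to realize part-wise motion priors is an adversarial prior (discriminator) per body part whose scores are combined into a style reward for a physics-based controller; the sketch below assumes that pattern, with hypothetical part groupings and reward form.

    import torch
    import torch.nn as nn

    parts = {"arms": slice(0, 20), "legs": slice(20, 40), "torso": slice(40, 60)}
    discs = {name: nn.Sequential(nn.Linear(s.stop - s.start, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
             for name, s in parts.items()}

    def style_reward(state):
        """state: (60,) concatenated part-wise motion features for one frame."""
        r = 0.0
        for name, s in parts.items():
            score = discs[name](state[s])            # per-part plausibility score
            r = r + torch.sigmoid(score).item()      # combine part skills
        return r / len(parts)

    reward = style_reward(torch.randn(60))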

PhaseMP: Robust 3D Pose Estimation via Phase-conditioned Human Motion Prior

no code implementations ICCV 2023 Mingyi Shi, Sebastian Starke, Yuting Ye, Taku Komura, Jungdam Won

We present a novel motion prior, called PhaseMP, modeling a probability distribution over pose transitions conditioned on a frequency-domain feature extracted by a periodic autoencoder.

3D Pose Estimation · Motion Estimation
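
The conditioning pattern can be sketched as follows; for brevity the frequency-domain feature is taken from a plain FFT rather than a trained periodic autoencoder, and all shapes and the Gaussian transition head are assumptions.

    import torch
    import torch.nn as nn

    def phase_feature(motion_window):
        """motion_window: (T, D) recent poses; returns a (2*D,) feature from the
        dominant non-DC FFT component of each channel (amplitude/phase info)."""
        spec = torch.fft.rfft(motion_window, dim=0)          # (T//2+1, D) complex
        dom = spec[1:].abs().argmax(dim=0) + 1               # dominant bin per channel
        comp = spec[dom, torch.arange(motion_window.shape[1])]
        return torch.cat([comp.real, comp.imag])

    pose_dim, window = 48, 60
    prior = nn.Sequential(nn.Linear(pose_dim + 2 * pose_dim, 256), nn.ReLU(),
                          nn.Linear(256, 2 * pose_dim))      # mean and log-variance

    def transition(pose, motion_window):
        out = prior(torch.cat([pose, phase_feature(motion_window)]))
        mean, logvar = out.chunk(2)
        return mean + torch.randn_like(mean) * (0.5 * logvar).exp()  # sample next pose

    next_pose = transition(torch.randn(pose_dim), torch.randn(window, pose_dim))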

Leveraging Demonstrations with Latent Space Priors

1 code implementation26 Oct 2022 Jonas Gehring, Deepak Gopinath, Jungdam Won, Andreas Krause, Gabriel Synnaeve, Nicolas Usunier

Starting with a learned joint latent space, we separately train a generative model of demonstration sequences and an accompanying low-level policy.

Offline RL
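
A high-level sketch of that two-stage recipe, with assumed module shapes: an autoregressive prior proposes the next latent, and a low-level policy decodes it, together with the current observation, into an action.

    import torch
    import torch.nn as nn

    latent_dim, obs_dim, act_dim = 32, 64, 12

    # generative model of demonstration sequences: predicts the next latent
    seq_prior = nn.GRU(latent_dim, 128, batch_first=True)
    to_latent = nn.Linear(128, latent_dim)

    # low-level policy: conditioned on the current latent and observation
    policy = nn.Sequential(nn.Linear(latent_dim + obs_dim, 256), nn.ReLU(),
                           nn.Linear(256, act_dim))

    def rollout_step(z_history, obs):
        """z_history: (1, T, latent_dim) latents so far; obs: (obs_dim,)."""
        h, _ = seq_prior(z_history)
        z_next = to_latent(h[0, -1])                 # propose the next latent
        action = policy(torch.cat([z_next, obs]))    # decode latent into an action
        return z_next, action

    z, a = rollout_step(torch.randn(1, 8, latent_dim), torch.randn(obs_dim))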

QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars

no code implementations20 Sep 2022 Alexander Winkler, Jungdam Won, Yuting Ye

Real-time tracking of human body motion is crucial for interactive and immersive experiences in AR/VR.

Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation

1 code implementation29 Mar 2022 Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu

Real-time human motion reconstruction from a sparse set of (e.g., six) wearable IMUs provides a non-intrusive and economical approach to motion capture.

Motion Estimation
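
A bare-bones version of pose regression from six IMUs with a transformer encoder, loosely following the setup named in the title; every hyperparameter below is an illustrative assumption.

    import torch
    import torch.nn as nn

    n_imus, imu_dim, pose_dim, d_model = 6, 12, 72, 128   # 12 = orientation (9) + accel (3)

    embed = nn.Linear(n_imus * imu_dim, d_model)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
        num_layers=3)
    head = nn.Linear(d_model, pose_dim)

    def reconstruct(imu_window):
        """imu_window: (B, T, n_imus * imu_dim) recent IMU readings.
        Returns (B, pose_dim): the pose estimate for the latest frame."""
        h = encoder(embed(imu_window))
        return head(h[:, -1])                        # predict from the last token

    pose = reconstruct(torch.randn(2, 40, n_imus * imu_dim))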

Conditional Motion In-betweening

1 code implementation9 Feb 2022 Jihoon Kim, Taehyun Byun, Seungyoun Shin, Jungdam Won, Sungjoon Choi

Motion in-betweening (MIB) is the process of generating intermediate skeletal movement between given start and target poses while preserving the naturalness of the motion, such as periodic footsteps while walking.

Pose Prediction
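
For reference, the simplest in-betweening baseline just interpolates linearly between the endpoint poses; learned MIB models such as the one above replace this with natural, condition-aware motion. A minimal sketch:

    import torch

    def lerp_inbetween(start_pose, target_pose, n_frames):
        """start_pose, target_pose: (D,) poses; returns (n_frames, D) including
        both endpoints. Rotations would normally use slerp instead of lerp."""
        t = torch.linspace(0.0, 1.0, n_frames).unsqueeze(1)   # (n_frames, 1)
        return (1 - t) * start_pose + t * target_pose

    frames = lerp_inbetween(torch.zeros(24), torch.ones(24), 10)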
