Search Results for author: Jiaman Li

Found 14 papers, 5 papers with code

Controllable Human-Object Interaction Synthesis

no code implementations • 6 Dec 2023 • Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, C. Karen Liu

Naively applying a diffusion model fails to predict object motion aligned with the input waypoints and cannot ensure the realism of interactions that require precise hand-object contact and appropriate contact grounded by the floor.

Human-Object Interaction Detection • Object
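
The excerpt above names precise hand-object contact as the failure mode of naive diffusion sampling. One common remedy in the diffusion literature, which may or may not match this paper's actual method, is loss-guided sampling: nudging each denoising step by the gradient of a contact penalty. A minimal sketch, with hypothetical names (`denoiser`, `hand_idx`):

```python
# Hypothetical sketch of loss-guided diffusion sampling: each denoising
# step is pushed toward hand-object contact by a penalty gradient.
# `denoiser`, `hand_idx`, and the guidance scale are illustrative.
import torch

def guided_denoise_step(denoiser, x_t, t, waypoints, hand_idx, obj_pos, scale=0.1):
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t, waypoints)      # predicted clean motion (B, T, J, 3)
    hand_pos = x0_pred[:, :, hand_idx, :]      # hand joint trajectory
    contact_loss = ((hand_pos - obj_pos) ** 2).sum()
    grad, = torch.autograd.grad(contact_loss, x_t)
    # Shift the prediction toward satisfying the contact constraint.
    return (x0_pred - scale * grad).detach()
```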

Object Motion Guided Human Motion Synthesis

no code implementations • 28 Sep 2023 • Jiaman Li, Jiajun Wu, C. Karen Liu

We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion.

Denoising • Human-Object Interaction Detection • +2
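
As a concrete reading of the sentence above, a conditional diffusion denoiser takes noisy human motion plus the object motion as conditioning and predicts the clean motion. The sketch below is a minimal stand-in; the layer choices and dimensions are assumptions, not OMOMO's published architecture.

```python
# Minimal sketch of an object-conditioned motion denoiser, assuming a
# GRU backbone and DDPM-style timestep embedding (illustrative only).
import torch
import torch.nn as nn

class ObjectConditionedDenoiser(nn.Module):
    def __init__(self, human_dim=66, obj_dim=12, hidden=256):
        super().__init__()
        self.obj_enc = nn.Linear(obj_dim, hidden)     # encode object motion
        self.time_emb = nn.Embedding(1000, hidden)    # diffusion timestep
        self.backbone = nn.GRU(human_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, human_dim)      # predict clean motion

    def forward(self, noisy_human, obj_motion, t):
        # Per-frame object features plus a broadcast timestep embedding.
        cond = self.obj_enc(obj_motion) + self.time_emb(t).unsqueeze(1)
        h, _ = self.backbone(torch.cat([noisy_human, cond], dim=-1))
        return self.head(h)                           # denoised human motion
```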

Motion Question Answering via Modular Motion Programs

1 code implementation • 15 May 2023 • Mark Endo, Joy Hsu, Jiaman Li, Jiajun Wu

In order to build artificial intelligence systems that can perceive and reason with human behavior in the real world, we must first design models that conduct complex spatio-temporal reasoning over motion sequences.

Attribute • Question Answering

CIRCLE: Capture In Rich Contextual Environments

1 code implementation • CVPR 2023 • Joao Pedro Araujo, Jiaman Li, Karthik Vetrivel, Rishi Agarwal, Deepak Gopinath, Jiajun Wu, Alexander Clegg, C. Karen Liu

Leveraging our dataset, the model learns to use ego-centric scene information to achieve nontrivial reaching tasks in the context of complex 3D scenes.

Ego-Body Pose Estimation via Ego-Head Pose Estimation

no code implementations • CVPR 2023 • Jiaman Li, C. Karen Liu, Jiajun Wu

In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments.

Benchmarking • Disentanglement • +1

GIMO: Gaze-Informed Human Motion Prediction in Context

1 code implementation • 20 Apr 2022 • Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, C. Karen Liu, Leonidas J. Guibas

We perform an extensive study of the benefits of leveraging the eye gaze for ego-centric human motion prediction with various state-of-the-art architectures.

Human motion prediction • motion prediction
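
The excerpt describes conditioning ego-centric motion prediction on eye gaze. A minimal fusion-by-concatenation baseline might look like the following; the dimensions and GRU backbone are assumptions, not the state-of-the-art architectures benchmarked in the paper.

```python
# Sketch of gaze-informed motion prediction: per-frame gaze directions
# are concatenated with past poses before forecasting (illustrative).
import torch
import torch.nn as nn

class GazeInformedPredictor(nn.Module):
    def __init__(self, pose_dim=63, gaze_dim=3, hidden=128, horizon=30):
        super().__init__()
        self.encoder = nn.GRU(pose_dim + gaze_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, horizon * pose_dim)
        self.horizon, self.pose_dim = horizon, pose_dim

    def forward(self, past_poses, gaze):
        # Fuse pose and gaze features frame by frame, then forecast
        # the future motion from the final encoder state.
        _, h = self.encoder(torch.cat([past_poses, gaze], dim=-1))
        out = self.decoder(h[-1])
        return out.view(-1, self.horizon, self.pose_dim)
```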

DenseGAP: Graph-Structured Dense Correspondence Learning with Anchor Points

no code implementations • 13 Dec 2021 • Zhengfei Kuang, Jiaman Li, Mingming He, Tong Wang, Yajie Zhao

To make the local features aware of the global context and improve their matching accuracy, we introduce DenseGAP, a new solution for efficient Dense correspondence learning with a Graph-structured neural network conditioned on Anchor Points.

Feature Correlation
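
One way to read "local features aware of the global context conditioned on anchor points" is cross-attention from dense per-pixel features to a sparse set of anchor features. The sketch below is an illustrative stand-in, not DenseGAP's actual graph-structured network.

```python
# Sketch: sparse anchor-point features inject global context into
# dense local descriptors via cross-attention (illustrative only).
import torch
import torch.nn as nn

class AnchorContext(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dense_feats, anchor_feats):
        # dense_feats:  (B, H*W, C) local descriptors for every pixel
        # anchor_feats: (B, K, C) descriptors at K anchor points
        ctx, _ = self.attn(dense_feats, anchor_feats, anchor_feats)
        return dense_feats + ctx   # context-aware features for matching
```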

Task-Generic Hierarchical Human Motion Prior using VAEs

no code implementations • 7 Jun 2021 • Jiaman Li, Ruben Villegas, Duygu Ceylan, Jimei Yang, Zhengfei Kuang, Hao Li, Yajie Zhao

We demonstrate the effectiveness of our hierarchical motion variational autoencoder in a variety of tasks including video-based human pose estimation, motion completion from partial observations, and motion synthesis from sparse key-frames.

Motion Synthesis • Pose Estimation
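
A hierarchical motion VAE, as named above, can be pictured as latents at more than one temporal scale, e.g. one sequence-level code plus per-frame codes. The sketch below assumes that two-level structure; the paper's actual hierarchy and losses may differ.

```python
# Sketch of a two-level motion VAE: a coarse sequence latent and fine
# per-frame latents jointly decode the motion (assumed structure).
import torch
import torch.nn as nn

class HierarchicalMotionVAE(nn.Module):
    def __init__(self, pose_dim=63, z_seq=32, z_frame=16, hidden=128):
        super().__init__()
        self.enc = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_seq = nn.Linear(hidden, 2 * z_seq)      # coarse latent stats
        self.to_frame = nn.Linear(hidden, 2 * z_frame)  # fine latent stats
        self.dec = nn.GRU(z_seq + z_frame, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)             # reparameterization
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, motion):
        h, h_last = self.enc(motion)
        z_seq = self.sample(self.to_seq(h_last[-1]))    # (B, z_seq)
        z_frame = self.sample(self.to_frame(h))         # (B, T, z_frame)
        z = torch.cat([z_seq.unsqueeze(1).expand(-1, h.size(1), -1), z_frame], -1)
        recon, _ = self.dec(z)
        return self.out(recon)                          # reconstructed motion
```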

Dynamic Facial Asset and Rig Generation from a Single Scan

no code implementations • 1 Oct 2020 • Jiaman Li, Zheng-Fei Kuang, Yajie Zhao, Mingming He, Karl Bladin, Hao Li

We also model the joint distribution between identities and expressions, enabling the inference of the full set of personalized blendshapes with dynamic appearances from a single neutral input scan.
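
For context on the blendshapes mentioned above, the standard linear blendshape model combines a neutral mesh with weighted expression offsets. The sketch shows only that textbook formula, not the paper's inference network.

```python
# Standard linear blendshape formula:
#   face = neutral + sum_i w_i * (expression_i - neutral)
import numpy as np

def blendshape_face(neutral, deltas, weights):
    """neutral: (V, 3) neutral-scan vertices
    deltas:  (K, V, 3) per-expression offsets from neutral
    weights: (K,) expression activations, typically in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)
```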

Learning to Generate Diverse Dance Motions with Transformer

no code implementations • 18 Aug 2020 • Jiaman Li, Yihang Yin, Hang Chu, Yi Zhou, Tingwu Wang, Sanja Fidler, Hao Li

We also introduce new evaluation metrics for the quality of synthesized dance motions, and demonstrate that our system can outperform state-of-the-art methods.

Motion Synthesis

VirtualHome: Simulating Household Activities via Programs

4 code implementations • CVPR 2018 • Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, Antonio Torralba

We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to "drive" an artificial agent to execute tasks in a simulated household environment.

Video Understanding
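
The "programs" mentioned above are ordered lists of atomic (inter)action steps. The toy executor below illustrates the idea: the step strings mimic VirtualHome's published script format, and the executor is a placeholder for the Unity3D simulator, not its actual API.

```python
# A household task as a program: an ordered list of atomic steps.
# Step strings imitate VirtualHome's "[Action] <object> (id)" format.
program = [
    "[Walk] <kitchen> (1)",
    "[Grab] <mug> (1)",
    "[PutBack] <mug> (1) <sink> (1)",
]

def drive_agent(program, execute_step=print):
    # Replace `execute_step` with a call into the simulator to "drive"
    # an artificial agent through the task, one atomic action at a time.
    for step in program:
        execute_step(step)

drive_agent(program)
```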
