no code implementations • 25 Jun 2024 • Zhen Wu, Jiaman Li, C. Karen Liu
Our experiments demonstrate the effectiveness of our high-level planner in generating plausible target layouts and our low-level motion generator in synthesizing realistic interactions for diverse objects.
no code implementations • 14 Jun 2024 • Lingni Ma, Yuting Ye, Fangzhou Hong, Vladimir Guzov, Yifeng Jiang, Rowan Postyeni, Luis Pesqueira, Alexander Gamino, Vijay Baiyya, Hyo Jin Kim, Kevin Bailey, David Soriano Fosas, C. Karen Liu, Ziwei Liu, Jakob Engel, Renzo De Nardi, Richard Newcombe
To the best of our knowledge, the Nymeria dataset is the world's largest in-the-wild collection of human motion with natural and diverse activities; it is the first of its kind to provide synchronized and localized multi-device multimodal egocentric data, and the world's largest dataset with motion-language descriptions.
no code implementations • 16 May 2024 • Keenon Werling, Janelle Kaneda, Alan Tan, Rishi Agarwal, Six Skov, Tom Van Wouwe, Scott Uhlrich, Nicholas Bianco, Carmichael Ong, Antoine Falisse, Shardul Sapkota, Aidan Chandra, Joshua Carter, Ezio Preatoni, Benjamin Fregly, Jennifer Hicks, Scott Delp, C. Karen Liu
While reconstructing human poses in 3D from inexpensive sensors has advanced significantly in recent years, quantifying the dynamics of human motion, including the muscle-generated joint torques and external forces, remains a challenge.
no code implementations • 27 Mar 2024 • Weizhuo Wang, C. Karen Liu, Monroe Kennedy III
Wearable collaborative robots stand to assist human wearers who need fall prevention assistance or wear exoskeletons.
no code implementations • 14 Mar 2024 • Chengshu Li, Ruohan Zhang, Josiah Wong, Cem Gokmen, Sanjana Srivastava, Roberto Martín-Martín, Chen Wang, Gabrael Levine, Wensi Ai, Benjamin Martinez, Hang Yin, Michael Lingelbach, Minjune Hwang, Ayano Hiranaka, Sujay Garlanka, Arman Aydin, Sharon Lee, Jiankai Sun, Mona Anvari, Manasi Sharma, Dhruva Bansal, Samuel Hunter, Kyu-Young Kim, Alan Lou, Caleb R Matthews, Ivan Villa-Renteria, Jerry Huayang Tang, Claire Tang, Fei Xia, Yunzhu Li, Silvio Savarese, Hyowon Gweon, C. Karen Liu, Jiajun Wu, Li Fei-Fei
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered robotics.
no code implementations • 12 Mar 2024 • Chen Wang, Haochen Shi, Weizhuo Wang, Ruohan Zhang, Li Fei-Fei, C. Karen Liu
Imitation learning from human hand motion data presents a promising avenue for imbuing robots with human-like dexterity in real-world manipulation tasks.
no code implementations • 15 Dec 2023 • Purvi Goel, Kuan-Chieh Wang, C. Karen Liu, Kayvon Fatahalian
Text-to-motion diffusion models can generate realistic animations from text prompts, but do not support fine-grained motion editing controls.
no code implementations • 6 Dec 2023 • Jiaman Li, Alexander Clegg, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, C. Karen Liu
We demonstrate that our learned interaction module can synthesize realistic human-object interactions, adhering to provided textual descriptions and sparse waypoint conditions.
no code implementations • 1 Nov 2023 • Ziang Liu, Stephen Tian, Michelle Guo, C. Karen Liu, Jiajun Wu
A designer policy is conditioned on task information and outputs a tool design that helps solve the task.
no code implementations • 11 Oct 2023 • Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, Jonathan T. Barron, Amit H. Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, C. Karen Liu, Lingjie Liu, Ben Mildenhall, Matthias Nießner, Björn Ommer, Christian Theobalt, Peter Wonka, Gordon Wetzstein
The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes.
no code implementations • 28 Sep 2023 • Jiaman Li, Jiajun Wu, C. Karen Liu
We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion.
no code implementations • 24 Sep 2023 • Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
no code implementations • 2 Sep 2023 • Yuanpei Chen, Chen Wang, Li Fei-Fei, C. Karen Liu
However, challenges arise due to the high-dimensional action space of the dexterous hand and the complex compositional dynamics of long-horizon tasks.
no code implementations • CVPR 2024 • Tom Van Wouwe, Seunghwan Lee, Antoine Falisse, Scott Delp, C. Karen Liu
Unlike existing methods, our model grants users the flexibility to determine the number and arrangement of sensors tailored to the specific activity of interest, without the need for retraining.
1 code implementation • CVPR 2023 • Joao Pedro Araujo, Jiaman Li, Karthik Vetrivel, Rishi Agarwal, Deepak Gopinath, Jiajun Wu, Alexander Clegg, C. Karen Liu
Leveraging our dataset, the model learns to use ego-centric scene information to achieve nontrivial reaching tasks in the context of complex 3D scenes.
no code implementations • 4 Jan 2023 • Sifan Ye, Yixing Wang, Jiaman Li, Dennis Park, C. Karen Liu, Huazhe Xu, Jiajun Wu
Large-scale capture of human motion with diverse, complex scenes, while immensely useful, is often considered prohibitively costly.
Ranked #3 on 3D Semantic Scene Completion on PRO-teXt
1 code implementation • 28 Dec 2022 • Kuan-Chieh Wang, Zhenzhen Weng, Maria Xenochristou, Joao Pedro Araujo, Jeffrey Gu, C. Karen Liu, Serena Yeung
Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection.
no code implementations • 9 Dec 2022 • Ziyuan Huang, Zhengping Zhou, Yung-Yu Chuang, Jiajun Wu, C. Karen Liu
We present a new method for generating controllable, dynamically responsive, and photorealistic human animations.
1 code implementation • CVPR 2023 • Jiaman Li, C. Karen Liu, Jiajun Wu
In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments.
1 code implementation • CVPR 2023 • Jonathan Tseng, Rodrigo Castellon, C. Karen Liu
Dance is an important human art form, but creating new dances can be difficult and time-consuming.
Ranked #1 on Motion Synthesis on AIST++ (Beat alignment score metric)
1 code implementation • 20 Apr 2022 • Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, C. Karen Liu, Leonidas J. Guibas
We perform an extensive study of the benefits of leveraging the eye gaze for ego-centric human motion prediction with various state-of-the-art architectures.
1 code implementation • 29 Mar 2022 • Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W. Winkler, C. Karen Liu
Real-time human motion reconstruction from a sparse set of (e.g., six) wearable IMUs provides a non-intrusive and economical approach to motion capture.
no code implementations • 7 Mar 2022 • Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, C. Karen Liu, Julien Pettré, Michiel Van de Panne, Marie-Paule Cani
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions that achieve a particular goal within an arbitrary environment.
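The sequential decision-making loop described in that snippet can be illustrated with a minimal sketch: tabular Q-learning on a hypothetical five-state corridor where the agent is rewarded only for reaching the rightmost state. The environment, hyperparameters, and reward are all illustrative, not taken from the paper.

```python
import random

random.seed(0)
N_STATES = 5
ACTIONS = (-1, +1)                       # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1        # learning rate, discount, exploration

def step(s, a):
    """One corridor transition: reward 1 only at the rightmost state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(s):
    # Break ties randomly so early episodes still explore both directions.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):                     # episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-terminal state.
```

The same loop structure (observe state, act, receive reward, update) underlies the policy-gradient and actor-critic methods used throughout the crowd-simulation survey.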
no code implementations • CVPR 2022 • Hanxiang Ren, Yanchao Yang, He Wang, Bokui Shen, Qingnan Fan, Youyi Zheng, C. Karen Liu, Leonidas J. Guibas
We describe a method to deal with performance drop in semantic segmentation caused by viewpoint changes within multi-camera systems, where temporally paired images are readily available, but the annotations may only be abundant for a few typical views.
1 code implementation • 13 Aug 2021 • Chen Wang, Claudia Pérez-D'Arpino, Danfei Xu, Li Fei-Fei, C. Karen Liu, Silvio Savarese
Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator.
no code implementations • 6 Aug 2021 • Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation.
1 code implementation • 6 Aug 2021 • Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese
We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI.
1 code implementation • 29 Jul 2021 • Yanchao Yang, Hanxiang Ren, He Wang, Bokui Shen, Qingnan Fan, Youyi Zheng, C. Karen Liu, Leonidas Guibas
Furthermore, to resolve ambiguities in converting the semantic images to semantic labels, we treat the view transformation network as a functional representation of an unknown mapping implied by the color images and propose functional label hallucination to generate pseudo-labels in the target domain.
2 code implementations • 27 Jul 2021 • Yuefan Shen, Yanchao Yang, Youyi Zheng, C. Karen Liu, Leonidas Guibas
We describe a method for unpaired realistic depth synthesis that learns diverse variations from the real-world depth scans and ensures geometric consistency between the synthetic and synthesized depth.
1 code implementation • 30 Mar 2021 • Keenon Werling, Dalton Omens, Jeongseok Lee, Ioannis Exarchos, C. Karen Liu
We present a fast and feature-complete differentiable physics engine, Nimble (nimblephysics.org), that supports Lagrangian dynamics and hard contact constraints for articulated rigid body simulation.
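A toy illustration (not Nimble's actual API) of what "differentiable" means here: the engine exposes gradients of simulation outcomes with respect to its inputs. Below, a 1-D point mass is rolled out with semi-implicit Euler, and the hand-derived gradient of the final position with respect to the initial velocity is checked against a finite difference.

```python
DT, G, STEPS = 0.01, -9.81, 100          # timestep, gravity, horizon

def rollout(v0):
    """Simulate a falling point mass; return its final position."""
    x, v = 0.0, v0
    for _ in range(STEPS):
        v += G * DT                      # semi-implicit Euler: velocity first
        x += v * DT
    return x

# For this linear system the gradient is exact: each of the STEPS position
# updates adds v*DT, and dv/dv0 = 1 throughout, so dx/dv0 = STEPS * DT.
grad_analytic = STEPS * DT               # = 1.0

# Sanity check with a central finite difference.
h = 1e-5
grad_fd = (rollout(2.0 + h) - rollout(2.0 - h)) / (2 * h)
assert abs(grad_fd - grad_analytic) < 1e-6
```

Hard contact makes these gradients far harder to define than in this smooth toy case, which is precisely the problem the engine addresses.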
no code implementations • 13 Mar 2021 • Visak Kumar, Sehoon Ha, C. Karen Liu
An EAP takes as input the predicted future state error in the target environment, which is provided by an error-prediction function, simultaneously trained with the EAP.
1 code implementation • 8 Mar 2021 • Ioannis Exarchos, Brian H. Do, Fabio Stroppa, Margaret M. Coad, Allison M. Okamura, C. Karen Liu
Soft robot serial chain manipulators with the capability for growth, stiffness control, and discrete joints have the potential to approach the dexterity of traditional robot arms, while improving safety, lowering cost, and providing an increased workspace, with potential application in home environments.
no code implementations • 3 Mar 2021 • Yunbo Zhang, Wenhao Yu, C. Karen Liu, Charles C. Kemp, Greg Turk
We produce a final animation by using inverse kinematics to guide a character's arm and hand to match the motion of the manipulation tool such as a knife or a frying pan.
no code implementations • 11 Dec 2020 • Wenhao Yu, C. Karen Liu, Greg Turk
When used with a set of thresholds, the safety estimator becomes a classifier for switching between the protective policy and the task policy.
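The threshold-based switching scheme described in that snippet can be sketched as follows; the safety estimator, threshold value, and both policies are hypothetical stand-ins, not the paper's learned components.

```python
THRESHOLD = 0.3                          # illustrative switching threshold

def task_policy(state):
    return "walk_forward"                # placeholder task action

def protective_policy(state):
    return "brace_for_fall"              # placeholder recovery action

def safety_estimator(state):
    # Hypothetical stand-in: the paper trains a regressor over states;
    # here we simply flag large torso tilt as risky.
    return min(1.0, abs(state["tilt"]) / 0.5)

def act(state):
    """Switch to the protective policy whenever estimated risk is too high."""
    risk = safety_estimator(state)
    return protective_policy(state) if risk > THRESHOLD else task_policy(state)

print(act({"tilt": 0.05}))   # low risk  -> walk_forward
print(act({"tilt": 0.45}))   # high risk -> brace_for_fall
```

Raising or lowering THRESHOLD trades task progress against conservatism, which is why the snippet speaks of a *set* of thresholds rather than a single fixed one.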
no code implementations • 7 Dec 2020 • Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, Martha White
This report presents the debates, posters, and discussions of the Sim2Real workshop held in conjunction with the 2020 edition of the "Robotics: Science and Systems" conference.
no code implementations • 23 Nov 2020 • Zhuo Xu, Wenhao Yu, Alexander Herzog, Wenlong Lu, Chuyuan Fu, Masayoshi Tomizuka, Yunfei Bai, C. Karen Liu, Daniel Ho
General contact-rich manipulation problems are long-standing challenges in robotics due to the difficulty of understanding complicated contact physics.
1 code implementation • 3 Nov 2020 • Ioannis Exarchos, Yifeng Jiang, Wenhao Yu, C. Karen Liu
Transferring reinforcement learning policies trained in physics simulation to the real hardware remains a challenge, known as the "sim-to-real" gap.
1 code implementation • 22 Sep 2020 • Amin Babadi, Michiel Van de Panne, C. Karen Liu, Perttu Hämäläinen
We propose a novel method for exploring the dynamics of physically based animated characters, and learning a task-agnostic action space that makes movement optimization easier.
1 code implementation • CVPR 2020 • Henry M. Clever, Zackory Erickson, Ariel Kapusta, Greg Turk, C. Karen Liu, Charles C. Kemp
We describe a physics-based method that simulates human bodies at rest in a bed with a pressure sensing mat, and present PressurePose, a synthetic dataset with 206K pressure images with 3D human poses and shapes.
4 code implementations • 10 Oct 2019 • Zackory Erickson, Vamsee Gangaram, Ariel Kapusta, C. Karen Liu, Charles C. Kemp
Assistive Gym models a person's physical capabilities and preferences for assistance, which are used to provide a reward function.
no code implementations • 17 Sep 2019 • Perttu Hämäläinen, Juuso Toikka, Amin Babadi, C. Karen Liu
A large body of animation research focuses on optimization of movement control, either as action sequences or policy parameters.
no code implementations • 14 Sep 2019 • Alexander Clegg, Zackory Erickson, Patrick Grady, Greg Turk, Charles C. Kemp, C. Karen Liu
We investigated the application of haptic feedback control and deep reinforcement learning (DRL) to robot-assisted dressing.
no code implementations • 9 Jul 2019 • K. Niranjan Kumar, Irfan Essa, Sehoon Ha, C. Karen Liu
Using our method, we train a robotic arm to estimate the mass distribution of an object with moving parts (e.g., an articulated rigid body system) by pushing it on a surface with unknown friction properties.
2 code implementations • 30 Apr 2019 • Yifeng Jiang, Tom Van Wouwe, Friedl De Groote, C. Karen Liu
In addition, the metabolic energy function on muscle activations is transformed to a nonlinear function of joint torques, joint configuration and joint velocity.
no code implementations • 4 Mar 2019 • Wenhao Yu, Visak CV Kumar, Greg Turk, C. Karen Liu
We present a new approach for transfer of dynamic robot control policies such as biped locomotion from simulation to real hardware.
1 code implementation • ICLR 2019 • Wenhao Yu, C. Karen Liu, Greg Turk
Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification.
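The domain-randomization setup that snippet contrasts itself with can be sketched in a few lines: each training episode samples new dynamics parameters so the policy must work across the whole distribution rather than one nominal model. The parameter names and ranges below are purely illustrative.

```python
import random

# Hypothetical dynamics parameters and randomization ranges.
PARAM_RANGES = {"mass": (0.8, 1.2), "friction": (0.5, 1.5)}

def sample_dynamics(rng):
    """Draw one randomized simulator configuration for the next episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)
episodes = [sample_dynamics(rng) for _ in range(3)]
for params in episodes:
    # train_one_episode(policy, params)   # RL training loop elided
    assert PARAM_RANGES["mass"][0] <= params["mass"] <= PARAM_RANGES["mass"][1]
```

The snippet's point is that this only helps when the real system's parameters actually fall inside (or near) the sampled ranges, which is what ties domain randomization to accurate system identification.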
no code implementations • 11 Mar 2018 • Yifeng Jiang, Jiazheng Sun, C. Karen Liu
Accurately modeling contact behaviors for real-world, near-rigid materials remains a grand challenge for existing rigid-body physics simulators.
2 code implementations • 24 Jan 2018 • Wenhao Yu, Greg Turk, C. Karen Liu
Indeed, a standard benchmark for DRL is to automatically create a running controller for a biped character from a simple reward function.
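A "simple reward function" for that biped running benchmark typically combines forward velocity, a control-effort penalty, and a bonus for staying upright. The weights and threshold below are illustrative, not taken from the paper.

```python
def running_reward(forward_velocity, torques, torso_height, alive_height=0.8):
    """Illustrative locomotion reward: go fast, use little torque, stay up."""
    effort_penalty = 1e-3 * sum(t * t for t in torques)
    alive_bonus = 1.0 if torso_height > alive_height else -10.0
    return forward_velocity - effort_penalty + alive_bonus

print(running_reward(2.0, [10.0, -5.0], 1.0))   # upright and running -> 2.875
print(running_reward(0.0, [], 0.5))             # fallen -> -10.0
```

DRL can turn this scalar signal into a full running controller, but as the surrounding paper argues, the resulting gaits often look unnatural without additional motion priors.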
no code implementations • 27 Sep 2017 • Zackory Erickson, Henry M. Clever, Greg Turk, C. Karen Liu, Charles C. Kemp
The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body.
no code implementations • 23 Sep 2017 • Wenhao Yu, C. Karen Liu, Greg Turk
Then, during the specialization training stage we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks.
no code implementations • 8 Mar 2017 • Visak CV Kumar, Sehoon Ha, C. Karen Liu
With this mixture of actor-critic architecture, the discrete contact sequence planning is solved through the selection of the best critics while the continuous control problem is solved by the optimization of actors.
1 code implementation • 8 Feb 2017 • Wenhao Yu, Jie Tan, C. Karen Liu, Greg Turk
Together, UP-OSI is a robust control policy that can be used across a wide range of dynamic models, and that is also responsive to sudden changes in the environment.
1 code implementation • 9 Sep 2016 • Jeongseok Lee, C. Karen Liu, Frank C. Park, Siddhartha S. Srinivasa
Our key contribution is to derive a recursive algorithm that evaluates DEL equations in $O(n)$, which scales up well for complex multibody systems such as humanoid robots.
no code implementations • 12 Aug 2016 • Michael X. Grey, Aaron D. Ames, C. Karen Liu
Locomotion for legged robots poses considerable challenges when confronted by obstacles and adverse environments.