Search Results for author: Tengyu Liu

Found 12 papers, 7 papers with code

Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance

1 code implementation • 26 Mar 2024 • Zan Wang, Yixin Chen, Baoxiong Jia, Puhao Li, Jinlu Zhang, Jingze Zhang, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang

Despite significant advancements in text-to-motion synthesis, generating language-guided human motion within 3D environments poses substantial challenges.

Motion Synthesis

AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents

no code implementations • 19 Mar 2024 • Jieming Cui, Tengyu Liu, Nian Liu, Yaodong Yang, Yixin Zhu, Siyuan Huang

Traditional approaches in physics-based motion generation, centered around imitation learning and reward shaping, often struggle to adapt to new scenarios.

Imitation Learning

Scaling Up Dynamic Human-Scene Interaction Modeling

no code implementations • 13 Mar 2024 • Nan Jiang, Zhiyuan Zhang, Hongjie Li, Xiaoxuan Ma, Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Siyuan Huang

Confronting the challenges of data scarcity and advanced motion synthesis in human-scene interaction modeling, we introduce the TRUMANS dataset alongside a novel HSI motion synthesis method.

Motion Synthesis

SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding

no code implementations • 17 Jan 2024 • Baoxiong Jia, Yixin Chen, Huangyue Yu, Yan Wang, Xuesong Niu, Tengyu Liu, Qing Li, Siyuan Huang

In comparison to recent advancements in the 2D domain, grounding language in 3D scenes faces several significant challenges: (i) the inherent complexity of 3D scenes due to the diverse object configurations, their rich attributes, and intricate relationships; (ii) the scarcity of paired 3D vision-language data to support grounded learning; and (iii) the absence of a unified learning framework to distill knowledge from grounded 3D data.

Scene Understanding · Visual Grounding

Feedback RoI Features Improve Aerial Object Detection

no code implementations • 28 Nov 2023 • Botao Ren, Botian Xu, Tengyu Liu, Jingyi Wang, Zhidong Deng

Neuroscience studies have shown that the human visual system utilizes high-level feedback information to guide lower-level perception, enabling adaptation to signals of different characteristics.

Feature Selection · Object +2

Grasp Multiple Objects with One Hand

1 code implementation • 24 Oct 2023 • Yuyang Li, Bo Liu, Yiran Geng, Puhao Li, Yaodong Yang, Yixin Zhu, Tengyu Liu, Siyuan Huang

The intricate kinematics of the human hand enable simultaneous grasping and manipulation of multiple objects, essential for tasks such as object transfer and in-hand manipulation.

Object

UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse Proposal Generation and Goal-Conditioned Policy

1 code implementation • CVPR 2023 • Yinzhen Xu, Weikang Wan, Jialiang Zhang, Haoran Liu, Zikang Shan, Hao Shen, Ruicheng Wang, Haoran Geng, Yijia Weng, Jiayi Chen, Tengyu Liu, Li Yi, He Wang

Trained on our synthesized large-scale dexterous grasp dataset, this model enables us to sample diverse and high-quality dexterous grasp poses for the object point cloud. For the second stage, we propose to replace the motion planning used in parallel gripper grasping with a goal-conditioned grasp policy, due to the complexity involved in dexterous grasping execution.

Motion Planning

Full-Body Articulated Human-Object Interaction

1 code implementation • ICCV 2023 • Nan Jiang, Tengyu Liu, Zhexuan Cao, Jieming Cui, Zhiyuan Zhang, Yixin Chen, He Wang, Yixin Zhu, Siyuan Huang

By learning the geometrical relationships in HOI, we devise the first model that leverages human pose estimation to tackle the estimation of articulated object poses and shapes during whole-body interactions.

Action Recognition · Human-Object Interaction Detection +3

HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes

1 code implementation • 18 Oct 2022 • Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang

Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the limitations of existing Human-Scene Interaction (HSI) datasets: they offer only limited scale and quality and lack semantics.

DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General Objects Based on Simulation

no code implementations • 6 Oct 2022 • Ruicheng Wang, Jialiang Zhang, Jiayi Chen, Yinzhen Xu, Puhao Li, Tengyu Liu, He Wang

Robotic dexterous grasping is the first step toward enabling human-like dexterous object manipulation and is thus a crucial robotic technology.

Object

GenDexGrasp: Generalizable Dexterous Grasping

1 code implementation • 3 Oct 2022 • Puhao Li, Tengyu Liu, Yuyang Li, Yiran Geng, Yixin Zhu, Yaodong Yang, Siyuan Huang

By leveraging the contact map as a hand-agnostic intermediate representation, GenDexGrasp efficiently generates diverse and plausible grasping poses with a high success rate and can transfer among diverse multi-fingered robotic hands.
