Search Results for author: Lixin Yang

Found 16 papers, 13 papers with code

SemGrasp: Semantic Grasp Generation via Language Aligned Discretization

no code implementations • 4 Apr 2024 • Kailin Li, Jingbo Wang, Lixin Yang, Cewu Lu, Bo Dai

We introduce a discrete representation that aligns the grasp space with semantic space, enabling the generation of grasp postures in accordance with language instructions.

Grasp Generation • Language Modelling +2
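The core idea in the snippet above, aligning a continuous grasp space with a discrete token space, resembles codebook quantization: a grasp feature vector is snapped to its nearest codebook entry, producing discrete tokens a language model could consume. The sketch below illustrates only that nearest-neighbour lookup; the codebook values and semantic labels are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of grasp-space discretization via a learned codebook.
# Codebook entries and their semantic labels are made up for illustration.

def quantize(vec, codebook):
    """Return the index of the codebook entry closest to vec (squared L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vec, codebook[i]))

codebook = [
    [0.0, 0.0],  # token 0: e.g. "open palm"
    [1.0, 0.0],  # token 1: e.g. "pinch"
    [0.0, 1.0],  # token 2: e.g. "power grasp"
]

token = quantize([0.9, 0.1], codebook)
print(token)  # -> 1 (nearest entry is [1.0, 0.0])
```

In a real vector-quantized pipeline the codebook is learned jointly with an encoder; here it is fixed only to show the discretization step.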

OAKINK2: A Dataset of Bimanual Hands-Object Manipulation in Complex Task Completion

no code implementations • 28 Mar 2024 • Xinyu Zhan, Lixin Yang, Yifei Zhao, Kangrui Mao, Hanlin Xu, Zenan Lin, Kailin Li, Cewu Lu

Based on the 3-level abstraction of OAKINK2, we explore a task-oriented framework for Complex Task Completion (CTC).

Motion Synthesis • Object

CHORD: Category-level Hand-held Object Reconstruction via Shape Deformation

no code implementations • ICCV 2023 • Kailin Li, Lixin Yang, Haoyu Zhen, Zenan Lin, Xinyu Zhan, Licheng Zhong, Jian Xu, Kejian Wu, Cewu Lu

This can be attributed to the fact that humans have mastered the shape prior of the 'mug' category, and can quickly establish the corresponding relations between different mug instances and the prior, such as where the rim and handle are located.

Object Reconstruction

Color-NeuS: Reconstructing Neural Implicit Surfaces with Color

1 code implementation • 14 Aug 2023 • Licheng Zhong, Lixin Yang, Kailin Li, Haoyu Zhen, Mei Han, Cewu Lu

Mesh is extracted from the signed distance function (SDF) network for the surface, and color for each surface vertex is drawn from the global color network.
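The snippet above describes a split between geometry (an SDF network) and appearance (a global color network queried per surface vertex). The toy sketch below mimics that split with an analytic unit-sphere SDF and a made-up color function standing in for the two networks; the grid-threshold surface extraction is a crude stand-in for marching cubes.

```python
import math

def sdf(p):
    # Toy stand-in for the SDF network: signed distance to a unit sphere.
    return math.sqrt(sum(x * x for x in p)) - 1.0

def color(p):
    # Toy stand-in for the "global color network": position -> RGB.
    return tuple(min(1.0, abs(x)) for x in p)

# Collect near-surface grid points, then draw a color for each vertex from
# the color function -- mirroring the geometry/appearance decoupling above.
verts = []
n = 9  # coarse n x n x n grid over [-1.25, 1.25]^3
for i in range(n):
    for j in range(n):
        for k in range(n):
            p = tuple(-1.25 + 2.5 * t / (n - 1) for t in (i, j, k))
            if abs(sdf(p)) < 0.1:  # near the zero level set
                verts.append((p, color(p)))
```

A real implementation would run marching cubes on the SDF grid (e.g. `skimage.measure.marching_cubes`) and batch-query the color network at the resulting vertices.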

HybrIK-X: Hybrid Analytical-Neural Inverse Kinematics for Whole-body Mesh Recovery

1 code implementation • 12 Apr 2023 • Jiefeng Li, Siyuan Bian, Chao Xu, Zhicun Chen, Lixin Yang, Cewu Lu

To address these issues, this paper presents a novel hybrid inverse kinematics solution, HybrIK, that integrates the merits of 3D keypoint estimation and body mesh recovery in a unified framework.

3D Human Pose Estimation • 3D Human Reconstruction +1

POEM: Reconstructing Hand in a Point Embedded Multi-view Stereo

1 code implementation • CVPR 2023 • Lixin Yang, Jian Xu, Licheng Zhong, Xinyu Zhan, Zhicheng Wang, Kejian Wu, Cewu Lu

Enabling neural networks to capture 3D geometry-aware features is essential in multi-view based vision tasks.

DART: Articulated Hand Model with Diverse Accessories and Rich Textures

1 code implementation • 14 Oct 2022 • Daiheng Gao, Yuliang Xiu, Kailin Li, Lixin Yang, Feng Wang, Peng Zhang, Bang Zhang, Cewu Lu, Ping Tan

A Unity GUI is also provided to generate synthetic hand data with user-defined settings, e.g., pose, camera, background, lighting, textures, and accessories.

Hand Pose Estimation • Unity

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

1 code implementation • CVPR 2022 • Lixin Yang, Kailin Li, Xinyu Zhan, Fei Wu, Anran Xu, Liu Liu, Cewu Lu

We begin by collecting 1,800 common household objects and annotating their affordances to construct the first knowledge base: Oak.

Grasp Generation • Object +1

Learning Universal Shape Dictionary for Realtime Instance Segmentation

1 code implementation • 2 Dec 2020 • Tutian Tang, Wenqiang Xu, Ruolin Ye, Lixin Yang, Cewu Lu

First, it learns a dictionary from a large collection of shape datasets, so that any shape can be decomposed into a linear combination of dictionary entries.

Explainable Models • Instance Segmentation +3
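Decomposing a shape into a linear combination over a dictionary, as the snippet above describes, can be sketched with a toy orthonormal dictionary, where the coefficients are just dot products and reconstruction is a weighted sum of atoms. The dictionary and shape vectors below are invented for illustration; the paper's dictionary is learned from shape datasets.

```python
def project(shape, dictionary):
    """Coefficients of shape in an orthonormal dictionary (dot products)."""
    return [sum(s * d for s, d in zip(shape, atom)) for atom in dictionary]

def reconstruct(coeffs, dictionary):
    """Weighted sum of dictionary atoms: the linear combination."""
    dim = len(dictionary[0])
    out = [0.0] * dim
    for c, atom in zip(coeffs, dictionary):
        for i in range(dim):
            out[i] += c * atom[i]
    return out

# Toy orthonormal dictionary of "shape atoms" (values made up).
D = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
shape = [0.5, -2.0, 0.0]   # lies in the span of D
coeffs = project(shape, D)
print(reconstruct(coeffs, D))  # -> [0.5, -2.0, 0.0]
```

With a learned, non-orthonormal dictionary the coefficients would instead come from a least-squares or sparse-coding solve; dot products suffice only in this orthonormal toy case.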

CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction

1 code implementation • ICCV 2021 • Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, Cewu Lu

In this paper, we present an explicit contact representation, namely Contact Potential Field (CPF), and a learning-fitting hybrid framework, namely MIHO, for Modeling the Interaction of Hand and Object.

Object Pose Estimation
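A contact potential field can be thought of as an energy over attached hand-object point pairs that grows as the pairs separate. The sketch below is only a spring-like toy energy under that reading, not the paper's actual formulation; the point sets, pairing, and stiffness are invented for illustration.

```python
def contact_energy(hand_pts, obj_pts, pairs, k=1.0):
    """Toy spring-like contact energy over (hand, object) point pairs.

    pairs: list of (hand_index, object_index) attachments; k: stiffness.
    """
    e = 0.0
    for hi, oi in pairs:
        d2 = sum((a - b) ** 2 for a, b in zip(hand_pts[hi], obj_pts[oi]))
        e += 0.5 * k * d2  # quadratic penalty on separation
    return e

hand = [(0.0, 0.0, 0.0)]
obj = [(1.0, 0.0, 0.0)]
print(contact_energy(hand, obj, [(0, 0)]))  # -> 0.5
```

In a fitting framework such an energy would be minimized over hand pose parameters, pulling attached contact points together.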

HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation

3 code implementations • CVPR 2021 • Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu

We show that HybrIK preserves both the accuracy of 3D pose and the realistic body structure of the parametric human model, leading to a pixel-aligned 3D body mesh and a more accurate 3D pose than the pure 3D keypoint estimation methods.

3D human pose and shape estimation • Keypoint Estimation
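"Analytical inverse kinematics" in the snippet above means solving joint rotations in closed form from keypoint positions rather than by iterative optimization. HybrIK's actual solution works with swing-twist decompositions on a parametric body model; the sketch below shows the idea only on the classic two-link planar arm, where the law of cosines gives the elbow angle directly.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form 2-link planar IK: joint angles reaching target (x, y)."""
    d2 = x * x + y * y
    # Elbow angle from the law of cosines (clamped for numerical safety).
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))
    theta2 = math.acos(c2)
    # Shoulder angle: target direction minus the wrist offset direction.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics, used to verify the IK solution round-trips."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = two_link_ik(1.2, 0.5)
print(fk(t1, t2))  # reaches (1.2, 0.5) up to floating-point error
```

The hybrid idea is that such closed-form solves keep the pose pixel-aligned to estimated keypoints, while a neural network supplies what the analytics cannot determine (e.g., twist about the bone axis).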

BiHand: Recovering Hand Mesh with Multi-stage Bisected Hourglass Networks

1 code implementation • 12 Aug 2020 • Lixin Yang, Jiasen Li, Wenqiang Xu, Yiqun Diao, Cewu Lu

Inside each stage, BiHand adopts a novel bisecting design that allows the networks to encapsulate two closely related pieces of information (e.g., 2D keypoints and silhouette in the 2D seeding stage, 3D joints and depth map in the 3D lifting stage, joint rotations and shape parameters in the mesh generation stage) in a single forward pass.

Pose Tracking
