Search Results for author: Mingdong Wu

Found 8 papers, 3 papers with code

Learning Gradient Fields for Scalable and Generalizable Irregular Packing

no code implementations • 18 Oct 2023 • Tianyang Xue, Mingdong Wu, Lin Lu, Haoxuan Wang, Hao Dong, Baoquan Chen

In this work, we investigate a novel machine learning-based approach that formulates the packing problem as conditional generative modeling.

Collision Avoidance • Layout Design • +1

GraspGF: Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping

no code implementations • 12 Sep 2023 • Tianhao Wu, Mingdong Wu, Jiyao Zhang, Yunchong Gan, Hao Dong

In this paper, we propose a novel task, human-assisting dexterous grasping, in which a policy is trained to control a robotic hand's fingers so as to assist users in grasping objects.

Score-PA: Score-based 3D Part Assembly

1 code implementation • 8 Sep 2023 • Junfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, Hao Dong

In this paper, we formulate this task from a novel generative perspective, introducing the Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly.

GenPose: Generative Category-level Object Pose Estimation via Diffusion Models

no code implementations • 18 Jun 2023 • Jiyao Zhang, Mingdong Wu, Hao Dong

Object pose estimation plays a vital role in embodied AI and computer vision, enabling intelligent agents to comprehend and interact with their surroundings.

6D Pose Estimation • 6D Pose Estimation using RGBD • +2

GFPose: Learning 3D Human Pose Prior with Gradient Fields

1 code implementation • CVPR 2023 • Hai Ci, Mingdong Wu, Wentao Zhu, Xiaoxuan Ma, Hao Dong, Fangwei Zhong, Yizhou Wang

During the denoising process, GFPose implicitly incorporates pose priors in gradients and unifies various discriminative and generative tasks in an elegant framework.
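As a hedged illustration of this denoising idea (a minimal sketch, not the released GFPose implementation), the snippet below runs Langevin-style updates along a learned gradient field over joint positions; the score network score_net, its signature, and the step sizes are assumed placeholders.

# Hypothetical sketch of pose denoising with a learned gradient field
# (illustrative only; not the official GFPose code).
import torch

def denoise_pose(score_net, noisy_pose, n_steps=100, step_size=1e-3):
    # Refine a noisy 3D pose (J joints x 3 coords) by following the
    # learned gradient of the log pose prior.
    pose = noisy_pose.clone()
    for t in range(n_steps):
        grad = score_net(pose, torch.tensor(float(t)))  # estimated gradient of the log prior at this step
        noise = torch.randn_like(pose)
        # Langevin-style update: ascend the gradient plus a small stochastic term
        pose = pose + step_size * grad + (2 * step_size) ** 0.5 * noise
    return pose

The same loop could be started from a fully corrupted pose (generation) or from an off-the-shelf estimator's output (refinement), which is one way to read the "unifies discriminative and generative tasks" claim above.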

Denoising • Monocular 3D Human Pose Estimation • +1

TarGF: Learning Target Gradient Field to Rearrange Objects without Explicit Goal Specification

no code implementations • 2 Sep 2022 • Mingdong Wu, Fangwei Zhong, Yulong Xia, Hao Dong

For object rearrangement, the TarGF can be used in two ways: 1) For model-based planning, we can cast the target gradient into a reference control and output actions with a distributed path planner; 2) For model-free reinforcement learning, the TarGF is not only used for estimating the likelihood-change as a reward but also provides suggested actions in residual policy learning.
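A hedged sketch of the two usage modes just described (not the TarGF release; all names are assumed placeholders): target_score stands for a trained network approximating the gradient of the log-likelihood of the target arrangement.

# Hypothetical sketch of the two TarGF usage modes (illustrative only).
import torch

def gradient_reward(target_score, state, next_state):
    # Model-free RL: first-order estimate of the likelihood change between
    # consecutive states, used as a reward signal.
    delta = next_state - state
    return torch.sum(target_score(state) * delta)  # ~ log p(next_state) - log p(state)

def residual_action(target_score, state, residual_policy, alpha=0.1):
    # Residual policy learning: the gradient field suggests an action,
    # and a learned residual network corrects it.
    suggested = alpha * target_score(state)
    return suggested + residual_policy(state)

For model-based planning, the same gradient would instead be cast into a reference control that a downstream path planner turns into executable actions, as in point 1 above.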

Imitation Learning • Object • +2
