Search Results for author: Ruihai Wu

Found 12 papers, 3 papers with code

PreAfford: Universal Affordance-Based Pre-Grasping for Diverse Objects and Environments

no code implementations · 4 Apr 2024 · Kairui Ding, Boyuan Chen, Ruihai Wu, Yuyang Li, Zongzheng Zhang, Huan-ang Gao, Siqi Li, Yixin Zhu, Guyue Zhou, Hao Dong, Hao Zhao

Robotic manipulation of ungraspable objects with two-finger grippers presents significant challenges due to the paucity of graspable features, while traditional pre-grasping techniques, which rely on repositioning objects and leveraging external aids such as table edges, lack adaptability across object categories and scenes.

Object

NaturalVLM: Leveraging Fine-grained Natural Language for Affordance-Guided Visual Manipulation

no code implementations · 13 Mar 2024 · Ran Xu, Yan Shen, Xiaoqi Li, Ruihai Wu, Hao Dong

To address these challenges, we introduce a comprehensive benchmark, NrVLM, comprising 15 distinct manipulation tasks and over 4500 episodes meticulously annotated with fine-grained language instructions.

Robot Manipulation

Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations

no code implementations · 21 Nov 2023 · Yushi Du, Ruihai Wu, Yan Shen, Hao Dong

More importantly, while many methods can model only one kind of joint motion (such as clockwise revolute rotation), our proposed framework is generic to different kinds of joint motions, in that a transformation matrix can model diverse joint motions in space (a minimal illustration follows below).
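
To make that claim concrete, here is a minimal numpy sketch (our own illustration, not code from the paper) showing how a single 4x4 homogeneous transformation matrix represents both a revolute motion (a door hinge) and a prismatic motion (a drawer slide); the helper names are hypothetical.

    import numpy as np

    def revolute_transform(axis, pivot, angle):
        # 4x4 homogeneous transform: rotate by `angle` about `axis` through `pivot`.
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues' formula
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = pivot - R @ pivot  # rotation about a pivot point, not the origin
        return T

    def prismatic_transform(axis, displacement):
        # 4x4 homogeneous transform: translate by `displacement` along `axis`.
        T = np.eye(4)
        T[:3, 3] = displacement * axis / np.linalg.norm(axis)
        return T

    # A door hinge (revolute) and a drawer slide (prismatic) share one representation.
    p = np.array([0.3, 0.2, 0.0, 1.0])  # a point on the movable part (homogeneous)
    T_door = revolute_transform(np.array([0.0, 0.0, 1.0]), np.zeros(3), np.pi / 4)
    T_drawer = prismatic_transform(np.array([1.0, 0.0, 0.0]), 0.15)
    print(T_door @ p, T_drawer @ p)  # both motions applied via matrix multiplication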

Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

no code implementations · NeurIPS 2023 · Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong

Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration, while concurrently transferring affordance knowledge to similar parts of the objects (a toy sketch of this principle follows below).

Efficient Exploration · Few-Shot Learning
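
The snippet above does not spell out the similarity metric, so the following is a hypothetical sketch of the exploration principle only: score local areas of a novel object by feature similarity to training-category geometry, interact with the least similar areas, and transfer affordance to the most similar ones. The function names and the cosine-similarity choice are our assumptions.

    import numpy as np

    def select_exploration_points(novel_feats, train_feats, budget=5):
        # Rank local areas of a novel object by cosine similarity to the nearest
        # training-category feature; explore the least similar, transfer to the rest.
        novel = novel_feats / np.linalg.norm(novel_feats, axis=1, keepdims=True)
        train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
        sim = (novel @ train.T).max(axis=1)       # similarity to closest training area
        explore_idx = np.argsort(sim)[:budget]    # most dissimilar areas: interact here
        transfer_idx = np.argsort(sim)[-budget:]  # most similar areas: reuse affordance
        return explore_idx, transfer_idx

    # Toy features: 100 local areas on a novel object vs. 500 training-area features.
    rng = np.random.default_rng(0)
    explore, transfer = select_exploration_points(rng.normal(size=(100, 32)),
                                                  rng.normal(size=(500, 32)))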

Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly

1 code implementation · ICCV 2023 · Ruihai Wu, Chenrui Tie, Yushi Du, Yan Zhao, Hao Dong

Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in our daily life (a numerical check of the SE(3) equivariance property named in the title follows below).

Disentanglement
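
SE(3) equivariance means that rigidly transforming an input part must transform the prediction in the same way. The toy check below uses a centroid "predictor" as a stand-in for a learned network; it is our illustration of the property, not the paper's model.

    import numpy as np

    def predict_pose(points):
        # Stand-in for a learned predictor; the centroid is trivially SE(3)-equivariant.
        return points.mean(axis=0)

    rng = np.random.default_rng(1)
    part = rng.normal(size=(64, 3))    # a point-cloud fragment

    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R = Q * np.sign(np.linalg.det(Q))  # random rotation with det(R) = +1
    t = rng.normal(size=3)             # random translation

    lhs = predict_pose(part @ R.T + t)  # transform the input, then predict
    rhs = predict_pose(part) @ R.T + t  # predict, then transform the output
    assert np.allclose(lhs, rhs)        # equivariance: the two must agree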

Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation

no code implementations · ICCV 2023 · Ruihai Wu, Chuanruo Ning, Hao Dong

In this paper, we study deformable object manipulation using dense visual affordance, with generalization towards diverse states, and propose a novel kind of foresightful dense affordance that avoids local optima by estimating state values for long-term manipulation (a toy sketch of this value-based selection follows below).

Deformable Object Manipulation · Object
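
A hypothetical sketch of the "foresightful" idea, under our own simplifications: rather than picking the action with the best immediate effect, rank candidate actions by the estimated value of the successor state. The dynamics model and value function below are toy stand-ins, not the paper's learned modules.

    import numpy as np

    def choose_action(state, candidates, dynamics, value_fn):
        # Rank candidate actions by the estimated long-term value of the successor
        # state rather than by immediate gain, to avoid local optima.
        next_values = [value_fn(dynamics(state, a)) for a in candidates]
        return candidates[int(np.argmax(next_values))]

    # Toy stand-ins for a learned dynamics model and a state-value estimator.
    dynamics = lambda s, a: s + 0.1 * a
    value_fn = lambda s: -np.linalg.norm(s - 1.0)  # closer to the goal state is better
    state = np.zeros(3)
    actions = [np.array(v) for v in ([1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0])]
    best = choose_action(state, actions, dynamics, value_fn)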

DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Manipulation

no code implementations · 5 Jul 2022 · Yan Zhao, Ruihai Wu, Zhehuan Chen, Yourong Zhang, Qingnan Fan, Kaichun Mo, Hao Dong

It is essential yet challenging for future home-assistant robots to understand and manipulate diverse 3D objects in daily human environments.

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions

no code implementations · 1 Dec 2021 · Yian Wang, Ruihai Wu, Kaichun Mo, Jiaqi Ke, Qingnan Fan, Leonidas Guibas, Hao Dong

Perceiving and interacting with 3D articulated objects, such as cabinets, doors, and faucets, pose particular challenges for future home-assistant robots performing daily tasks in human environments.

Friction

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

no code implementations · ICLR 2022 · Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong

In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point, where the perception system outputs more actionable guidance than kinematic structure estimation by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals (a toy sketch of this interface follows below).
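
To illustrate the interface only (not the paper's architecture), here is a toy sketch in which a per-point affordance head scores where to interact and a trajectory head proposes a short sequence of waypoints; both heads are hypothetical stand-ins for learned modules.

    import numpy as np

    def actionable_prior(points, affordance_head, traj_head, horizon=4):
        # Per-point affordance scores say where to interact; a trajectory head
        # proposes short waypoint offsets from the chosen point.
        scores = affordance_head(points)              # (N,) actionability per point
        best = int(np.argmax(scores))
        waypoints = points[best] + traj_head(points[best], horizon)  # (horizon, 3)
        return best, scores, waypoints

    # Toy stand-ins for the two learned heads.
    affordance_head = lambda pts: -np.abs(pts[:, 2])  # prefer points near z = 0
    traj_head = lambda p, h: np.outer(np.arange(1, h + 1), np.array([0, 0, 0.05]))
    pts = np.random.default_rng(2).normal(size=(256, 3))
    best, scores, waypoints = actionable_prior(pts, affordance_head, traj_head)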

TDAPNet: Prototype Network with Recurrent Top-Down Attention for Robust Object Classification under Partial Occlusion

no code implementations · 9 Sep 2019 · Mingqing Xiao, Adam Kortylewski, Ruihai Wu, Siyuan Qiao, Wei Shen, Alan Yuille

Despite deep convolutional neural networks' great success in object classification, they suffer from a severe generalization performance drop under occlusion due to the inconsistency between training and testing data.

General Classification · Object (+1)
