Search Results for author: Ruihai Wu

Found 16 papers, 4 papers with code

ET-SEED: Efficient Trajectory-Level SE(3) Equivariant Diffusion Policy

no code implementations • 6 Nov 2024 • Chenrui Tie, Yue Chen, Ruihai Wu, Boxuan Dong, Zeyi Li, Chongkai Gao, Hao Dong

We theoretically extend equivariant Markov kernels and simplify the condition of the equivariant diffusion process, thereby significantly improving training efficiency for trajectory-level SE(3) equivariant diffusion policies in an end-to-end manner.

Imitation Learning • Robot Manipulation
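A minimal sketch of what "trajectory-level SE(3) equivariance" means for a manipulation policy like ET-SEED's: if the observed point cloud is rotated and translated by some g in SE(3), the predicted trajectory should transform by the same g. The toy policy below is equivariant by construction and is only meant to illustrate the property being enforced; it is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_policy(points: np.ndarray, n_waypoints: int = 5) -> np.ndarray:
    """Toy equivariant 'policy': waypoints interpolate from the point-cloud
    centroid to its farthest point. Returns (n_waypoints, 3) positions."""
    centroid = points.mean(axis=0)
    farthest = points[np.argmax(np.linalg.norm(points - centroid, axis=1))]
    alphas = np.linspace(0.0, 1.0, n_waypoints)[:, None]
    return (1.0 - alphas) * centroid + alphas * farthest

def random_se3():
    """Sample a random rotation (via QR decomposition) and translation."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:          # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q, rng.standard_normal(3)

pc = rng.standard_normal((128, 3))    # toy observation: a 128-point cloud
R, t = random_se3()

traj_then_transform = toy_policy(pc) @ R.T + t   # g applied to the output
transform_then_traj = toy_policy(pc @ R.T + t)   # g applied to the input

# Equivariance: transforming the observation transforms the whole trajectory.
assert np.allclose(traj_then_transform, transform_then_traj, atol=1e-8)
print("trajectory-level SE(3) equivariance holds for the toy policy")
```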

GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation

1 code implementation • 2 Nov 2024 • Haoran Lu, Ruihai Wu, Yitong Li, Sijie Li, Ziyu Zhu, Chuanruo Ning, Yan Shen, Longzan Luo, Yuanpei Chen, Hao Dong

Recent successes in reinforcement learning and vision-based methods offer promising avenues for learning garment manipulation.

Imitation Learning

EqvAfford: SE(3) Equivariance for Point-Level Affordance Learning

no code implementations • 4 Aug 2024 • Yue Chen, Chenrui Tie, Ruihai Wu, Hao Dong

Humans perceive and interact with the world with the awareness of equivariance, facilitating us in manipulating different objects in diverse poses.

UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence

no code implementations • CVPR 2024 • Ruihai Wu, Haoran Lu, Yiyan Wang, YuBo Wang, Hao Dong

Garment manipulation (e.g., unfolding, folding, and hanging clothes) is essential for future robots to accomplish home-assistant tasks, yet it is highly challenging due to the diversity of garment configurations, geometries, and deformations.

Diversity

NaturalVLM: Leveraging Fine-grained Natural Language for Affordance-Guided Visual Manipulation

no code implementations • 13 Mar 2024 • Ran Xu, Yan Shen, Xiaoqi Li, Ruihai Wu, Hao Dong

To address these challenges, we introduce a comprehensive benchmark, NrVLM, comprising 15 distinct manipulation tasks and over 4500 episodes meticulously annotated with fine-grained language instructions.

Robot Manipulation

RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation

1 code implementation • 23 Feb 2024 • Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg, Hooshang Nayyeri, Shenlong Wang, Yunzhu Li

We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment.
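A hedged sketch of what an action-conditioned scene graph (ACSG) could look like as a data structure: nodes are observed objects or parts, and edges record that executing an action on one node exposes or affects another (e.g. "open cabinet → reveals bowl"). The class and field names here are illustrative, not the RoboEXP API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                      # e.g. "cabinet_door", "bowl"
    attributes: dict = field(default_factory=dict)

@dataclass
class ActionEdge:
    source: str                    # node the action is applied to
    target: str                    # node that becomes visible / reachable
    action: str                    # e.g. "open", "pick", "push"

@dataclass
class ActionConditionedSceneGraph:
    nodes: dict = field(default_factory=dict)   # name -> Node
    edges: list = field(default_factory=list)   # list of ActionEdge

    def add_node(self, name, **attributes):
        self.nodes[name] = Node(name, attributes)

    def add_action_edge(self, source, target, action):
        self.edges.append(ActionEdge(source, target, action))

    def plan_to_reach(self, target):
        """Chain the actions whose edges lead to `target` (naive backtrack)."""
        plan, current = [], target
        while True:
            incoming = [e for e in self.edges if e.target == current]
            if not incoming:
                break
            edge = incoming[0]
            plan.append((edge.action, edge.source))
            current = edge.source
        return list(reversed(plan))

# Toy exploration result: opening the cabinet reveals a bowl inside it.
acsg = ActionConditionedSceneGraph()
acsg.add_node("cabinet_door", articulated=True)
acsg.add_node("bowl", graspable=True)
acsg.add_action_edge("cabinet_door", "bowl", "open")
print(acsg.plan_to_reach("bowl"))   # [('open', 'cabinet_door')]
```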

Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations

no code implementations • 21 Nov 2023 • Yushi Du, Ruihai Wu, Yan Shen, Hao Dong

More importantly, while many methods can only model a certain kind of joint motion (such as revolute motion in a clockwise direction), our proposed framework is generic to different kinds of joint motions, in that a transformation matrix can model diverse joint motions in space.
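A minimal sketch (not the paper's network) of the point made above: a single 4x4 homogeneous transformation matrix can describe different kinds of joint motion. A revolute joint is a rotation about an axis through a pivot; a prismatic joint is a pure translation along an axis. Both act on part points in the same way, which is the sense in which one representation covers both.

```python
import numpy as np

def revolute_transform(axis, pivot, angle):
    """Rotation by `angle` (radians) about `axis` passing through `pivot`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(pivot) - R @ np.asarray(pivot)
    return T

def prismatic_transform(axis, displacement):
    """Translation by `displacement` along `axis`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    T = np.eye(4)
    T[:3, 3] = displacement * axis
    return T

def apply(T, points):
    """Apply a 4x4 transform to an (N, 3) array of part points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

door_points = np.array([[0.5, 0.0, 0.0], [0.5, 0.0, 1.0]])   # toy door edge
hinge = revolute_transform(axis=[0, 0, 1], pivot=[0, 0, 0], angle=np.pi / 2)
drawer = prismatic_transform(axis=[1, 0, 0], displacement=0.3)

print(apply(hinge, door_points))    # door edge swung 90 degrees about the z hinge
print(apply(drawer, door_points))   # same points slid 0.3 m along +x
```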

Where2Explore: Few-shot Affordance Learning for Unseen Novel Categories of Articulated Objects

no code implementations • NeurIPS 2023 • Chuanruo Ning, Ruihai Wu, Haoran Lu, Kaichun Mo, Hao Dong

Our framework explicitly estimates the geometric similarity across different categories, identifying local areas that differ from shapes in the training categories for efficient exploration while concurrently transferring affordance knowledge to similar parts of the objects.

Efficient Exploration • Few-Shot Learning
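A hedged sketch of the idea behind Where2Explore (not the paper's model): score each point of a novel object by its geometric similarity to local features seen during training, then spend the interaction budget on the least similar points while reusing affordance predictions where similarity is high. The local feature extractor here is a crude stand-in; the paper learns similarity and affordance jointly.

```python
import numpy as np

def toy_local_features(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Stand-in local descriptor: sorted distances to the k nearest neighbours."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:k + 1]

def similarity_to_training(novel_feat, train_feat):
    """Per-point similarity = max cosine similarity to any training feature."""
    a = novel_feat / np.linalg.norm(novel_feat, axis=1, keepdims=True)
    b = train_feat / np.linalg.norm(train_feat, axis=1, keepdims=True)
    return (a @ b.T).max(axis=1)

rng = np.random.default_rng(0)
train_points = rng.uniform(-1, 1, size=(256, 3))   # shapes from seen categories (toy)
novel_points = rng.uniform(-1, 1, size=(128, 3))   # shape from an unseen category (toy)

sim = similarity_to_training(toy_local_features(novel_points),
                             toy_local_features(train_points))
budget = 10
explore_idx = np.argsort(sim)[:budget]       # least similar -> interact here
transfer_idx = np.argsort(sim)[-budget:]     # most similar -> reuse affordance
print("explore at points:", explore_idx)
print("transfer affordance at points:", transfer_idx)
```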

Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly

1 code implementation • ICCV 2023 • Ruihai Wu, Chenrui Tie, Yushi Du, Yan Zhao, Hao Dong

Shape assembly aims to reassemble parts (or fragments) into a complete object, which is a common task in our daily life.

Disentanglement

Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation

no code implementations • ICCV 2023 • Ruihai Wu, Chuanruo Ning, Hao Dong

In this paper, we study deformable object manipulation using dense visual affordance, with generalization towards diverse states, and propose a novel kind of foresightful dense affordance, which avoids local optima by estimating states' values for long-term manipulation.

Deformable Object Manipulation • Object
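A minimal sketch of the selection rule the snippet describes (not the paper's network): a greedy dense affordance ranks candidate actions by immediate gain, while a foresightful affordance ranks them by the estimated value of the state each action leads to. The numbers below are invented purely to show how the two rules can disagree and why foresight avoids a local optimum.

```python
candidate_actions = ["pick_corner_A", "pick_corner_B", "pick_center"]

immediate_gain = {          # e.g. cloth-coverage increase from this one action
    "pick_corner_A": 0.30,
    "pick_corner_B": 0.25,
    "pick_center":   0.10,
}
estimated_state_value = {   # long-horizon value of the state after the action
    "pick_corner_A": 0.35,  # good now, but leaves the cloth tangled
    "pick_corner_B": 0.80,  # slightly worse now, much easier to finish later
    "pick_center":   0.20,
}

greedy_choice = max(candidate_actions, key=immediate_gain.get)
foresightful_choice = max(candidate_actions, key=estimated_state_value.get)

print("greedy (local-optimum-prone):", greedy_choice)        # pick_corner_A
print("foresightful (value-based):  ", foresightful_choice)  # pick_corner_B
```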

DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Manipulation

no code implementations • 5 Jul 2022 • Yan Zhao, Ruihai Wu, Zhehuan Chen, Yourong Zhang, Qingnan Fan, Kaichun Mo, Hao Dong

It is essential yet challenging for future home-assistant robots to understand and manipulate diverse 3D objects in daily human environments.

3D geometry

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions

no code implementations • 1 Dec 2021 • Yian Wang, Ruihai Wu, Kaichun Mo, Jiaqi Ke, Qingnan Fan, Leonidas Guibas, Hao Dong

Perceiving and interacting with 3D articulated objects, such as cabinets, doors, and faucets, pose particular challenges for future home-assistant robots performing daily tasks in human environments.

Friction

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

no code implementations • ICLR 2022 • Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong

In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point, in which the perception system outputs more actionable guidance than kinematic structure estimation by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordance and trajectory proposals.
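A hedged sketch of the kind of output the VAT-Mart snippet describes (field names and the proposal rule are illustrative, not the paper's API): a per-point affordance score over the object point cloud plus, for the most promising points, proposed interaction trajectories given here as short sequences of end-effector waypoints.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-0.5, 0.5, size=(1024, 3))        # toy object point cloud

# Stand-ins for the perception outputs: an affordance score per point and a
# small set of trajectory proposals (waypoints) for the best-scoring points.
affordance = rng.uniform(0.0, 1.0, size=1024)          # higher = more actionable
top_idx = np.argsort(affordance)[-5:]                  # 5 most actionable points

def propose_trajectory(contact_point, n_waypoints=4, pull_dir=(0.0, 0.0, 0.1)):
    """Toy proposal: a short straight pull starting at the contact point."""
    steps = np.arange(1, n_waypoints + 1)[:, None]
    return contact_point + steps * np.asarray(pull_dir)

proposals = {int(i): propose_trajectory(points[i]) for i in top_idx}
print("best contact point:", points[top_idx[-1]])
print("its proposed waypoints:\n", proposals[int(top_idx[-1])])
```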

TDAPNet: Prototype Network with Recurrent Top-Down Attention for Robust Object Classification under Partial Occlusion

no code implementations • 9 Sep 2019 • Mingqing Xiao, Adam Kortylewski, Ruihai Wu, Siyuan Qiao, Wei Shen, Alan Yuille

Despite deep convolutional neural networks' great success in object classification, they suffer from a severe drop in generalization performance under occlusion due to the inconsistency between training and testing data.

General Classification • Object
