Search Results for author: Kaichun Mo

Found 23 papers, 13 papers with code

Fixing Malfunctional Objects With Learned Physical Simulation and Functional Prediction

no code implementations • 5 May 2022 • Yining Hong, Kaichun Mo, Li Yi, Leonidas J. Guibas, Antonio Torralba, Joshua B. Tenenbaum, Chuang Gan

Specifically, FixNet consists of a perception module to extract the structured representation from the 3D point cloud, a physical dynamics prediction module to simulate the results of interactions on 3D objects, and a functionality prediction module to evaluate the functionality and choose the correct fix.
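The snippet names a three-stage pipeline. Below is a minimal PyTorch-style sketch of how such perception, dynamics, and functionality modules might compose; every module choice, name, and tensor shape here is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FixNetSketch(nn.Module):
    """Hypothetical composition of a perception module, a dynamics module,
    and a functionality scorer, following the pipeline described above."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Perception: per-point features from a (B, N, 3) point cloud.
        self.perception = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        # Dynamics: roll the shape feature forward under a candidate fix.
        self.dynamics = nn.GRUCell(feat_dim, feat_dim)
        # Functionality: score how functional the simulated result is.
        self.functionality = nn.Linear(feat_dim, 1)

    def forward(self, points, fix_candidates):
        # points: (B, N, 3); fix_candidates: (B, K, feat_dim)
        feat = self.perception(points).max(dim=1).values   # (B, feat_dim)
        scores = []
        for k in range(fix_candidates.shape[1]):
            simulated = self.dynamics(fix_candidates[:, k], feat)
            scores.append(self.functionality(simulated))   # (B, 1)
        return torch.cat(scores, dim=1).argmax(dim=1)      # index of best fix
```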

GIMO: Gaze-Informed Human Motion Prediction in Context

no code implementations • 20 Apr 2022 • Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, Karen Liu, Leonidas J. Guibas

Our network achieves the top performance in human motion prediction on the proposed dataset, thanks to the intent information from the gaze and the denoised gaze feature modulated by the motion.

Human Motion Prediction • Motion Prediction
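One common reading of a "gaze feature modulated by the motion" is FiLM-style conditioning, where one stream predicts a per-channel scale and shift for the other. The sketch below illustrates that pattern under that assumption; it is not the paper's actual architecture, and all dimensions are made up.

```python
import torch
import torch.nn as nn

class GazeMotionModulation(nn.Module):
    """FiLM-style modulation: motion features predict a per-channel scale
    and shift applied to the gaze feature. Dimensions are assumptions."""
    def __init__(self, gaze_dim=64, motion_dim=64):
        super().__init__()
        self.to_scale = nn.Linear(motion_dim, gaze_dim)
        self.to_shift = nn.Linear(motion_dim, gaze_dim)

    def forward(self, gaze_feat, motion_feat):
        # gaze_feat: (B, gaze_dim); motion_feat: (B, motion_dim)
        return self.to_scale(motion_feat) * gaze_feat + self.to_shift(motion_feat)
```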

Object Pursuit: Building a Space of Objects via Discriminative Weight Generation

no code implementations • ICLR 2022 • Chuanyu Pan, Yanchao Yang, Kaichun Mo, Yueqi Duan, Leonidas Guibas

We perform an extensive study of the key features of the proposed framework and analyze the characteristics of the learned representations.

Disentanglement

IFR-Explore: Learning Inter-object Functional Relationships in 3D Indoor Scenes

no code implementations • ICLR 2022 • Qi Li, Kaichun Mo, Yanchao Yang, Hang Zhao, Leonidas Guibas

While most works focus on single-object or agent-object visual functionality and affordances, our work proposes to study a new kind of visual relationship that is also important to perceive and model: inter-object functional relationships (e.g., a switch on the wall turns the light on or off, a remote control operates the TV).
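The relationships in the abstract's examples are naturally encoded as a directed graph from trigger objects to affected objects; a tiny illustrative sketch (object names are hypothetical):

```python
from collections import defaultdict

# Edge (trigger -> target): interacting with `trigger` changes `target`'s state.
functional_edges = [
    ("wall_switch", "ceiling_light"),   # a switch turns the light on or off
    ("remote_control", "tv"),           # a remote operates the TV
]

graph = defaultdict(list)
for trigger, target in functional_edges:
    graph[trigger].append(target)

print(graph["wall_switch"])  # ['ceiling_light']
```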

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions

no code implementations • 1 Dec 2021 • Yian Wang, Ruihai Wu, Kaichun Mo, Jiaqi Ke, Qingnan Fan, Leonidas Guibas, Hao Dong

Perceiving and interacting with 3D articulated objects, such as cabinets, doors, and faucets, pose particular challenges for future home-assistant robots performing daily tasks in human environments.

Learning to Regrasp by Learning to Place

no code implementations • 18 Sep 2021 • Shuo Cheng, Kaichun Mo, Lin Shao

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.

O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning

1 code implementation • 29 Jun 2021 • Kaichun Mo, Yuzhe Qin, Fanbo Xiang, Hao Su, Leonidas Guibas

In contrast to the vast literature on modeling, perceiving, and understanding agent-object (e.g., human-object, hand-object, robot-object) interaction in computer vision and robotics, very few past works have studied object-object interaction, which also plays an important role in robotic manipulation and planning tasks.

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

no code implementations • ICLR 2022 • Ruihai Wu, Yan Zhao, Kaichun Mo, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas Guibas, Hao Dong

In this paper, we propose object-centric actionable visual priors as a novel perception-interaction handshaking point: rather than estimating kinematic structure, the perception system outputs more actionable guidance by predicting dense geometry-aware, interaction-aware, and task-aware visual action affordances and trajectory proposals.
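A minimal sketch of what such per-point outputs could look like: an affordance score and a short waypoint trajectory for every point of the input cloud. The backbone, head shapes, and trajectory length are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ActionablePriorsSketch(nn.Module):
    """Per-point action affordance score plus a trajectory proposal of
    `traj_steps` 3D waypoints; dimensions are illustrative assumptions."""
    def __init__(self, feat_dim=128, traj_steps=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.affordance_head = nn.Linear(feat_dim, 1)
        self.trajectory_head = nn.Linear(feat_dim, traj_steps * 3)

    def forward(self, points):
        # points: (B, N, 3)
        feat = self.backbone(points)                            # (B, N, D)
        affordance = torch.sigmoid(self.affordance_head(feat))  # (B, N, 1)
        traj = self.trajectory_head(feat).view(*points.shape[:2], -1, 3)
        return affordance, traj                                 # (B, N, T, 3)
```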

Where2Act: From Pixels to Actions for Articulated 3D Objects

1 code implementation • ICCV 2021 • Kaichun Mo, Leonidas Guibas, Mustafa Mukadam, Abhinav Gupta, Shubham Tulsiani

One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment.

Compositionally Generalizable 3D Structure Prediction

1 code implementation • 4 Dec 2020 • Songfang Han, Jiayuan Gu, Kaichun Mo, Li Yi, Siyu Hu, Xuejin Chen, Hao Su

However, a much more difficult and under-explored issue remains: how to generalize the learned skills to unseen object categories with very different shape geometry distributions.

3D Shape Reconstruction • Translation

DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation

1 code implementation • 12 Aug 2020 • Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Lin Gao

While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure, in a controllable manner.

3D Shape Generation
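The controllability claim rests on keeping two separate latent codes, one for structure and one for geometry, so that either can be swapped independently. A minimal two-branch sketch under that assumption (not the paper's hierarchical networks; all sizes are made up):

```python
import torch
import torch.nn as nn

class DisentangledShapeCodes(nn.Module):
    """Two-branch encoder producing separate structure and geometry latents;
    the shared point encoder and latent sizes are illustrative assumptions."""
    def __init__(self, feat_dim=256, struct_dim=64, geo_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.to_structure = nn.Linear(feat_dim, struct_dim)
        self.to_geometry = nn.Linear(feat_dim, geo_dim)

    def forward(self, points):
        # points: (B, N, 3) -> pooled shape feature -> two disentangled codes
        feat = self.encoder(points).max(dim=1).values
        return self.to_structure(feat), self.to_geometry(feat)
```

Pairing the structure code of one shape with the geometry code of another is what enables the controllable synthesis described above.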

Generative 3D Part Assembly via Dynamic Graph Learning

2 code implementations • NeurIPS 2020 • Jialei Huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas Guibas, Hao Dong

Analogous to buying IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation.

Graph Learning • Pose Estimation • +1
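The described reasoning step, proposing a pose for each part while accounting for all the other parts, is naturally expressed as message passing over a part graph. A single-round, fully connected sketch follows; the dimensions, aggregation, and pose parameterization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PartAssemblySketch(nn.Module):
    """One message-passing round over a fully connected part graph, then a
    per-part 6-DoF pose head (3 translation + 4 quaternion values)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.part_encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.message = nn.Linear(2 * feat_dim, feat_dim)
        self.pose_head = nn.Linear(feat_dim, 7)

    def forward(self, part_points):
        # part_points: (B, P, N, 3) -- P parts with N points each
        feat = self.part_encoder(part_points).max(dim=2).values  # (B, P, D)
        B, P, D = feat.shape
        # Pair every part with every other part and aggregate the messages.
        pairs = torch.cat([feat.unsqueeze(2).expand(B, P, P, D),
                           feat.unsqueeze(1).expand(B, P, P, D)], dim=-1)
        feat = feat + self.message(pairs).mean(dim=2)
        return self.pose_head(feat)                              # (B, P, 7)
```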

Rethinking Sampling in 3D Point Cloud Generative Adversarial Networks

no code implementations • 12 Jun 2020 • He Wang, Zetian Jiang, Li Yi, Kaichun Mo, Hao Su, Leonidas J. Guibas

We further study how different evaluation metrics weigh the sampling pattern against the geometry and propose several perceptual metrics forming a sampling spectrum of metrics.
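For context, the widely used Chamfer distance shows why such a spectrum is needed: it compares nearest-neighbor geometry and is largely blind to how points are distributed over a surface. A minimal implementation (not one of the paper's proposed metrics):

```python
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a: (N, 3), b: (M, 3).
    Two clouds sampled very differently from the same surface can still
    score near zero, i.e. the metric ignores the sampling pattern."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```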

SAPIEN: A SimulAted Part-based Interactive ENvironment

1 code implementation • CVPR 2020 • Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su

To achieve this task, a simulated environment with physically realistic simulation, sufficient articulated objects, and transferability to the real robot is indispensable.

PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions

1 code implementation • ECCV 2020 • Kaichun Mo, He Wang, Xinchen Yan, Leonidas J. Guibas

3D generative shape modeling is a fundamental research area in computer vision and interactive computer graphics, with many real-world applications.

3D Shape Generation

StructEdit: Learning Structural Shape Variations

1 code implementation • CVPR 2020 • Kaichun Mo, Paul Guerrero, Li Yi, Hao Su, Peter Wonka, Niloy Mitra, Leonidas J. Guibas

Learning to encode differences in the geometry and (topological) structure of the shapes of ordinary objects is key to generating semantically plausible variations of a given shape, transferring edits from one shape to another, and many other applications in 3D content creation.
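The core idea, encoding the difference between two shapes rather than the shapes themselves, makes an edit a reusable object. A toy sketch of edit transfer in a latent space; the stand-in linear encoder and decoder are assumptions, not the paper's hierarchical networks:

```python
import torch

torch.manual_seed(0)
encode = torch.nn.Linear(3, 8)   # stand-in shape -> latent map
decode = torch.nn.Linear(8, 3)   # stand-in latent -> shape map

def transfer_edit(shape_a, shape_a_edited, shape_c):
    # Represent the edit as a latent delta, then apply it to another shape.
    delta = encode(shape_a_edited) - encode(shape_a)
    return decode(encode(shape_c) + delta)

edited_c = transfer_edit(torch.randn(3), torch.randn(3), torch.randn(3))
```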

StructureNet: Hierarchical Graph Networks for 3D Shape Generation

2 code implementations • 1 Aug 2019 • Kaichun Mo, Paul Guerrero, Li Yi, Hao Su, Peter Wonka, Niloy Mitra, Leonidas J. Guibas

We introduce StructureNet, a hierarchical graph network which (i) can directly encode shapes represented as such n-ary graphs; (ii) can be robustly trained on large and complex shape families; and (iii) can be used to generate a great diversity of realistic structured shape geometries.

3D Shape Generation
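Point (i) of the abstract, directly encoding shapes represented as n-ary graphs, suggests a recursive encoder over a part hierarchy. A minimal sketch; the node layout, mean aggregation, and feature sizes are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class NaryGraphEncoderSketch(nn.Module):
    """Recursive encoder: a node's code combines its own part feature with
    the mean of its children's codes, handling any number of children."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.feat_dim = feat_dim
        self.combine = nn.Linear(2 * feat_dim, feat_dim)

    def encode(self, node):
        # node: {"feat": (feat_dim,) tensor, "children": list of nodes}
        if node["children"]:
            agg = torch.stack([self.encode(c) for c in node["children"]]).mean(dim=0)
        else:
            agg = torch.zeros(self.feat_dim)
        return torch.relu(self.combine(torch.cat([node["feat"], agg])))

# Usage: encode a two-leaf hierarchy into a single root code.
leaf = {"feat": torch.randn(64), "children": []}
root = {"feat": torch.randn(64), "children": [leaf, leaf]}
code = NaryGraphEncoderSketch().encode(root)  # (64,) root shape code
```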

The AdobeIndoorNav Dataset: Towards Deep Reinforcement Learning based Real-world Indoor Robot Visual Navigation

1 code implementation • 24 Feb 2018 • Kaichun Mo, Haoxiang Li, Zhe Lin, Joon-Young Lee

Synthetic data suffers from a domain gap to real-world scenes, while visual inputs rendered from 3D-reconstructed scenes have undesired holes and artifacts.

Robotics
