Search Results for author: Lin Shao

Found 14 papers, 6 papers with code

RoboAssembly: Learning Generalizable Furniture Assembly Policy in a Novel Multi-robot Contact-rich Simulation Environment

no code implementations • 19 Dec 2021 • Mingxin Yu, Lin Shao, Zhehuan Chen, Tianhao Wu, Qingnan Fan, Kaichun Mo, Hao Dong

Part assembly is a typical but challenging task in robotics, where robots assemble a set of individual parts into a complete shape.

SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning

no code implementations • 29 Nov 2021 • Jun Lv, Qiaojun Yu, Lin Shao, Wenhai Liu, Wenqiang Xu, Cewu Lu

We apply our system to perform articulated object manipulation, both in the simulation and the real world.

Learning to Regrasp by Learning to Place

no code implementations • 18 Sep 2021 • Shuo Cheng, Kaichun Mo, Lin Shao

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.

OmniHang: Learning to Hang Arbitrary Objects using Contact Point Correspondences and Neural Collision Estimation

1 code implementation • 26 Mar 2021 • Yifan You, Lin Shao, Toki Migimatsu, Jeannette Bohg

In this paper, we propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.

GRAC: Self-Guided and Self-Regularized Actor-Critic

1 code implementation • 18 Sep 2020 • Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg

One dominant component of recent deep reinforcement learning algorithms is the target network, which mitigates divergence when learning the Q function.

Tasks: Decision Making, OpenAI Gym
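The target network mentioned in the GRAC abstract is a standard stabilization mechanism in deep Q-learning. As an illustration only (this is not the paper's code, and GRAC itself studies alternatives to this component), here is a minimal sketch of the common Polyak "soft update" that slowly tracks the online parameters; the rate `TAU` and the toy parameter vectors are assumptions for this example:

```python
import numpy as np

TAU = 0.005  # soft-update rate; a common choice, not taken from the paper

def soft_update(target_params, online_params, tau=TAU):
    """Polyak-average the online parameters into the target parameters."""
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

# Toy vectors standing in for network weight tensors.
online = [np.ones(3)]
target = [np.zeros(3)]

for _ in range(1000):
    target = soft_update(target, online)

# After many steps the target has moved most of the way toward the
# online parameters, while any single step changes it only slightly.
```

Because each step moves the target only a fraction `tau` of the way, the bootstrapped Q-learning target changes slowly, which is the divergence-mitigation effect the abstract refers to.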

Generative 3D Part Assembly via Dynamic Graph Learning

2 code implementations • NeurIPS 2020 • Jialei Huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas Guibas, Hao Dong

Analogous to buying IKEA furniture: given a set of 3D parts that can assemble into a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimates for the input parts, and finally call robotic planning and control routines for actuation.

Tasks: Graph Learning, Pose Estimation +1
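Graph-based part-assembly reasoning of the kind this paper builds on treats each part as a node that exchanges messages with the others. As a generic illustration (not the authors' model; all dimensions and weight matrices below are toy assumptions), one round of message passing on a fully connected part graph can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                   # toy feature dimension (assumed)
W_msg = rng.normal(size=(DIM, DIM))       # message transform (assumed)
W_upd = rng.normal(size=(2 * DIM, DIM))   # node-update transform (assumed)

def message_passing_step(node_feats):
    """One round: each node aggregates the mean message from all other
    nodes, then mixes it with its own features to produce a new state."""
    n = len(node_feats)
    msgs = node_feats @ W_msg                            # outgoing messages
    agg = (msgs.sum(0, keepdims=True) - msgs) / (n - 1)  # mean over neighbors
    return np.tanh(np.concatenate([node_feats, agg], axis=1) @ W_upd)

parts = rng.normal(size=(6, DIM))  # six toy part-feature vectors
parts = message_passing_step(parts)
```

Stacking several such rounds lets each part's representation reflect the geometry of the other parts, which is what makes joint pose proposals possible.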

Design and Control of Roller Grasper V2 for In-Hand Manipulation

no code implementations • 18 Apr 2020 • Shenli Yuan, Lin Shao, Connor L. Yako, Alex Gruebele, J. Kenneth Salisbury

The ability to perform in-hand manipulation remains an unsolved problem; having this capability would allow robots to perform sophisticated tasks that require repositioning and reorienting grasped objects.

Tasks: Imitation Learning

Learning to Scaffold the Development of Robotic Manipulation Skills

no code implementations • 3 Nov 2019 • Lin Shao, Toki Migimatsu, Jeannette Bohg

To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment.

UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands

no code implementations • 24 Oct 2019 • Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg

The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.

Learning Visual Dynamics Models of Rigid Objects using Relational Inductive Biases

1 code implementation • 9 Sep 2019 • Fabio Ferreira, Lin Shao, Tamim Asfour, Jeannette Bohg

The first, a Graph Networks (GN) based approach, considers explicitly defined edge attributes; not only does it consistently underperform an auto-encoder baseline that we modified to predict future states, but our results also indicate that different edge attributes can significantly influence the predictions.

ClusterNet: 3D Instance Segmentation in RGB-D Images

no code implementations • 24 Jul 2018 • Lin Shao, Ye Tian, Jeannette Bohg

We show that our method generalizes well to real-world data, achieving visually better segmentation results.

Tasks: 3D Instance Segmentation, Decision Making +1

Motion-based Object Segmentation based on Dense RGB-D Scene Flow

1 code implementation • 14 Apr 2018 • Lin Shao, Parth Shah, Vikranth Dwaracherla, Jeannette Bohg

Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow.

Tasks: Motion Segmentation, Semantic Segmentation
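The core intuition behind motion-based segmentation is that points on the same rigid object share a coherent motion. As a toy illustration (not the authors' network; the flow vectors, greedy grouping, and threshold below are all assumptions), points can be clustered by the similarity of their scene-flow vectors:

```python
import numpy as np

THRESH = 0.1  # flow-similarity threshold; an assumption for this toy example

def segment_by_flow(flow, thresh=THRESH):
    """Greedily assign each point a cluster label: a point joins the
    first cluster whose representative flow vector is within `thresh`,
    otherwise it starts a new cluster."""
    labels = -np.ones(len(flow), dtype=int)
    reps = []  # representative flow vector per cluster
    for i, f in enumerate(flow):
        for k, r in enumerate(reps):
            if np.linalg.norm(f - r) < thresh:
                labels[i] = k
                break
        else:
            labels[i] = len(reps)
            reps.append(f.copy())
    return labels

# Two toy "objects": five points moving right, five moving up.
flow = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 5)
labels = segment_by_flow(flow)
# → points 0-4 form cluster 0, points 5-9 form cluster 1
```

The paper's model estimates the segmentation, trajectories, and scene flow jointly and handles an unknown number of objects; this sketch only conveys why coherent flow makes objects separable.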
