no code implementations • 10 Mar 2023 • Yichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, Lin Shao, Wojciech Matusik, Leonidas Guibas
A successful joint-optimized assembly needs to satisfy the bilateral objectives of shape structure and joint alignment.
no code implementations • 27 Oct 2022 • Jun Lv, Yunhai Feng, Cheng Zhang, Shuang Zhao, Lin Shao, Cewu Lu
Model-based reinforcement learning (MBRL) is recognized as having the potential to be significantly more sample-efficient than model-free RL.
Deformable Object Manipulation
Model-based Reinforcement Learning
no code implementations • 19 Dec 2021 • Mingxin Yu, Lin Shao, Zhehuan Chen, Tianhao Wu, Qingnan Fan, Kaichun Mo, Hao Dong
Part assembly is a typical but challenging task in robotics, where robots assemble a set of individual parts into a complete shape.
no code implementations • 29 Nov 2021 • Jun Lv, Qiaojun Yu, Lin Shao, Wenhai Liu, Wenqiang Xu, Cewu Lu
We apply our system to perform articulated object manipulation tasks, both in the simulation and the real world.
no code implementations • 18 Sep 2021 • Shuo Cheng, Kaichun Mo, Lin Shao
In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.
1 code implementation • 26 Mar 2021 • Yifan You, Lin Shao, Toki Migimatsu, Jeannette Bohg
In this paper, we propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.
1 code implementation • 18 Sep 2020 • Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg
One dominant component of recent deep reinforcement learning algorithms is the target network, which mitigates divergence when learning the Q function.
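For context, the snippet below is a minimal sketch of the standard target-network idea in deep Q-learning, not the method proposed in this paper; the network sizes, hyperparameters, and function names are illustrative assumptions.

```python
# Sketch of a DQN-style TD update with a target network (general idea only;
# architecture and hyperparameters are assumed, not taken from the paper).
import copy
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)  # frozen copy used to compute bootstrap targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def td_update(obs, action, reward, next_obs, done):
    # Bootstrapping from the slowly updated target network, rather than from
    # q_net itself, is what mitigates divergence of the Q function.
    with torch.no_grad():
        target = reward + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    q_pred = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def sync_target():
    # Periodically copy (or Polyak-average) the online weights into the target.
    target_net.load_state_dict(q_net.state_dict())
```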
3 code implementations • NeurIPS 2020 • Jialei Huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas Guibas, Hao Dong
Analogous to assembling a piece of IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimates for the input parts, and finally call robotic planning and control routines for actuation.
no code implementations • 18 Apr 2020 • Shenli Yuan, Lin Shao, Connor L. Yako, Alex Gruebele, J. Kenneth Salisbury
The ability to perform in-hand manipulation remains an unsolved problem; having this capability would allow robots to perform sophisticated tasks requiring repositioning and reorienting of grasped objects.
1 code implementation • ECCV 2020 • Yichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, Leonidas Guibas
Autonomous assembly is a crucial capability for robots in many applications.
no code implementations • 3 Nov 2019 • Lin Shao, Toki Migimatsu, Jeannette Bohg
To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment.
1 code implementation • 24 Oct 2019 • Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg
The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.
1 code implementation • 9 Sep 2019 • Fabio Ferreira, Lin Shao, Tamim Asfour, Jeannette Bohg
The first, a Graph Networks (GN) based approach, considers explicitly defined edge attributes; not only does it consistently underperform an auto-encoder baseline that we modified to predict future states, but our results also indicate how different edge attributes can significantly influence the predictions.
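As a point of reference, the following is a minimal sketch of a graph-network-style block with explicit edge attributes, in the general spirit of Graph Networks; it does not reproduce this paper's architecture, feature dimensions, or edge definitions, all of which are assumed here for illustration.

```python
# Generic GN-style block: edges are updated from sender/receiver nodes plus an
# explicit edge attribute, then aggregated to update each node (illustrative only).
import torch
import torch.nn as nn

class GNBlock(nn.Module):
    def __init__(self, node_dim=16, edge_dim=8, hidden=64):
        super().__init__()
        # Edge update conditioned on sender node, receiver node, and edge attribute.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, edge_dim))
        # Node update conditioned on the node and its aggregated incoming edges.
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, node_dim))

    def forward(self, nodes, edge_index, edge_attr):
        # nodes: (N, node_dim); edge_index: (2, E) sender/receiver indices;
        # edge_attr: (E, edge_dim)
        senders, receivers = edge_index
        e = self.edge_mlp(torch.cat([nodes[senders], nodes[receivers], edge_attr], dim=-1))
        # Sum incoming edge messages per receiving node.
        agg = torch.zeros(nodes.size(0), e.size(-1), device=nodes.device)
        agg.index_add_(0, receivers, e)
        v = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return v, e
```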
no code implementations • 24 Jul 2018 • Lin Shao, Ye Tian, Jeannette Bohg
We show that our method generalizes well on real-world data, achieving visually better segmentation results.
1 code implementation • 14 Apr 2018 • Lin Shao, Parth Shah, Vikranth Dwaracherla, Jeannette Bohg
Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow.
1 code implementation • 17 Oct 2017 • Li Yi, Lin Shao, Manolis Savva, Haibin Huang, Yang Zhou, Qirui Wang, Benjamin Graham, Martin Engelcke, Roman Klokov, Victor Lempitsky, Yuan Gan, Pengyu Wang, Kun Liu, Fenggen Yu, Panpan Shui, Bingyang Hu, Yan Zhang, Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Minki Jeong, Jaehoon Choi, Changick Kim, Angom Geetchandra, Narasimha Murthy, Bhargava Ramu, Bharadwaj Manda, M. Ramanathan, Gautam Kumar, P Preetham, Siddharth Srivastava, Swati Bhugra, Brejesh lall, Christian Haene, Shubham Tulsiani, Jitendra Malik, Jared Lafer, Ramsey Jones, Siyuan Li, Jie Lu, Shi Jin, Jingyi Yu, Qi-Xing Huang, Evangelos Kalogerakis, Silvio Savarese, Pat Hanrahan, Thomas Funkhouser, Hao Su, Leonidas Guibas
We introduce a large-scale 3D shape understanding benchmark using data and annotations from the ShapeNet 3D object database.