Search Results for author: Lin Shao

Found 26 papers, 10 papers with code

Learning Reward for Robot Skills Using Large Language Models via Self-Alignment

no code implementations12 May 2024 Yuwei Zeng, Yao Mu, Lin Shao

Learning reward functions remains a bottleneck in equipping robots with a broad repertoire of skills.

ManiFoundation Model for General-Purpose Robotic Manipulation of Contact Synthesis with Arbitrary Objects and Robots

no code implementations11 May 2024 Zhixuan Xu, Chongkai Gao, Zixuan Liu, Gang Yang, Chenrui Tie, Haozhuo Zheng, Haoyu Zhou, Weikun Peng, Debang Wang, Tianyi Chen, Zhouliang Yu, Lin Shao

Our work introduces a comprehensive framework to develop a foundation model for general robotic manipulation that formalizes a manipulation task as contact synthesis.

Tasks: Diversity, Object

RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation

no code implementations28 Mar 2024 Chongkai Gao, Zhengrong Xue, Shuying Deng, Tianhai Liang, Siqi Yang, Lin Shao, Huazhe Xu

RiEMann learns a manipulation task from scratch with 5 to 10 demonstrations, generalizes to unseen SE(3) transformations and instances of target objects, resists visual interference of distracting objects, and follows the near real-time pose change of the target object.

Tasks: Imitation Learning, Object, +3
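An SE(3) transformation, as referenced in the RiEMann abstract, combines a 3-D rotation with a translation. A minimal pure-Python sketch of applying one to a point (illustrative only, not RiEMann's code; the rotation matrix and translation below are arbitrary example values):

```python
def apply_se3(p, R, t):
    # Rotate point p by the 3x3 matrix R, then translate by t.
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]        # 90-degree rotation about the z-axis
t = [0.5, 0.0, 0.0]     # translation along x

print(apply_se3([1.0, 0.0, 0.0], R, t))  # [0.5, 1.0, 0.0]
```

An equivariant model's output pose transforms consistently when the input point cloud is moved by such a transform, which is what lets the method generalize to unseen SE(3) transformations of the target object.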

SoftMAC: Differentiable Soft Body Simulation with Forecast-based Contact Model and Two-way Coupling with Articulated Rigid Bodies and Clothes

no code implementations6 Dec 2023 Min Liu, Gang Yang, Siyuan Luo, Lin Shao

We present SoftMAC, a differentiable simulation framework that couples soft bodies with articulated rigid bodies and clothes.

Diff-Transfer: Model-based Robotic Manipulation Skill Transfer via Differentiable Physics Simulation

no code implementations7 Oct 2023 Yuqi Xiang, Feitong Chen, Qinsi Wang, Yang Gang, Xiang Zhang, Xinghao Zhu, Xingyu Liu, Lin Shao

In this work, we introduce $\textit{Diff-Transfer}$, a novel framework leveraging differentiable physics simulation to efficiently transfer robotic skills.
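Differentiable physics simulation lets gradients of a task loss flow back through the dynamics, so actions or parameters can be optimized directly. A toy 1-D point-mass illustration of that idea (a hypothetical sketch, not the Diff-Transfer implementation; finite differences stand in here for the simulator's automatic differentiation):

```python
def rollout(force, steps=10, dt=0.1):
    # Integrate a unit-mass point under a constant force.
    x, v = 0.0, 0.0
    for _ in range(steps):
        v += force * dt
        x += v * dt
    return x

def grad_rollout(force, steps=10, dt=0.1, eps=1e-6):
    # Central finite difference; a differentiable simulator would supply this via autodiff.
    return (rollout(force + eps, steps, dt) - rollout(force - eps, steps, dt)) / (2 * eps)

target = 1.0   # desired final position
force = 0.0
for _ in range(100):
    # Gradient descent on the squared distance to the goal position.
    loss_grad = 2 * (rollout(force) - target) * grad_rollout(force)
    force -= 0.5 * loss_grad
print(round(rollout(force), 3))  # converges to ~1.0
```

The same gradient-through-dynamics pattern is what enables transferring a skill between related tasks by optimizing directly against the new task's objective.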


Generalizable Long-Horizon Manipulations with Large Language Models

no code implementations3 Oct 2023 Haoyu Zhou, Mingyu Ding, Weikun Peng, Masayoshi Tomizuka, Lin Shao, Chuang Gan

This work introduces a framework harnessing the capabilities of Large Language Models (LLMs) to generate primitive task conditions for generalizable long-horizon manipulations with novel objects and unseen tasks.

GAMMA: Generalizable Articulation Modeling and Manipulation for Articulated Objects

1 code implementation28 Sep 2023 Qiaojun Yu, JunBo Wang, Wenhai Liu, Ce Hao, Liu Liu, Lin Shao, Weiming Wang, Cewu Lu

Results show that GAMMA significantly outperforms SOTA articulation modeling and manipulation algorithms in unseen and cross-category articulated objects.

Tasks: Manner Of Articulation Detection, Robot Manipulation, +1

Jade: A Differentiable Physics Engine for Articulated Rigid Bodies with Intersection-Free Frictional Contact

no code implementations9 Sep 2023 Gang Yang, Siyuan Luo, Lin Shao

Compared to existing differentiable simulators, Jade offers intersection-free collision simulation and stable LCP solutions for multiple frictional contacts.
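A linear complementarity problem (LCP) for contact seeks forces lam >= 0 such that A @ lam + b >= 0 and lam . (A @ lam + b) = 0. A tiny projected Gauss-Seidel solver (a generic textbook method, not Jade's solver; A and b are made-up example data):

```python
def pgs_lcp(A, b, iters=100):
    """Projected Gauss-Seidel: find lam >= 0 with A@lam + b >= 0 and lam.(A@lam + b) = 0."""
    n = len(b)
    lam = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Residual excluding the i-th variable's own contribution.
            r = b[i] + sum(A[i][j] * lam[j] for j in range(n)) - A[i][i] * lam[i]
            # Solve for lam[i], then project onto the non-negative orthant.
            lam[i] = max(0.0, -r / A[i][i])
    return lam

A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [-1.0, 1.0]
lam = pgs_lcp(A, b)
print([round(x, 3) for x in lam])  # [0.5, 0.0]
```

The projection step (`max(0.0, ...)`) encodes the physical constraint that contact forces can only push, never pull.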


SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning

no code implementations29 Nov 2021 Jun Lv, Qiaojun Yu, Lin Shao, Wenhai Liu, Wenqiang Xu, Cewu Lu

We apply our system to perform articulated object manipulation tasks, both in the simulation and the real world.

Learning to Regrasp by Learning to Place

1 code implementation18 Sep 2021 Shuo Cheng, Kaichun Mo, Lin Shao

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.

Tasks: Diversity, Object

OmniHang: Learning to Hang Arbitrary Objects using Contact Point Correspondences and Neural Collision Estimation

1 code implementation26 Mar 2021 Yifan You, Lin Shao, Toki Migimatsu, Jeannette Bohg

In this paper, we propose a system that takes partial point clouds of an object and a supporting item as input and learns to decide where and how to hang the object stably.


GRAC: Self-Guided and Self-Regularized Actor-Critic

1 code implementation18 Sep 2020 Lin Shao, Yifan You, Mengyuan Yan, Qingyun Sun, Jeannette Bohg

One dominant component of recent deep reinforcement learning algorithms is the target network, which mitigates divergence when learning the Q-function.

Tasks: Decision Making, OpenAI Gym, +2
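A target network is typically maintained as a slowly updated copy of the online Q-network. A generic Polyak-averaging (soft-update) sketch, which is standard deep-RL practice rather than anything specific to GRAC:

```python
def soft_update(target_params, online_params, tau=0.005):
    """Move each target parameter a small step (tau) toward its online counterpart."""
    return [(1 - tau) * tp + tau * op for tp, op in zip(target_params, online_params)]

target = [0.0, 1.0]   # toy stand-ins for network weights
online = [1.0, 0.0]
target = soft_update(target, online, tau=0.5)
print(target)  # [0.5, 0.5]
```

Because the target values change slowly, the bootstrapped Q-learning objective stays more stable; GRAC's contribution, per the title, is achieving stability through self-guidance and self-regularization instead.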

Generative 3D Part Assembly via Dynamic Graph Learning

3 code implementations NeurIPS 2020 Jialei Huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas Guibas, Hao Dong

Analogous to assembling a piece of IKEA furniture: given a set of 3D parts that can assemble into a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimates for the input parts, and finally call robotic planning and control routines for actuation.

Tasks: Graph Learning, Graph Neural Network, +2
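Graph learning over a set of parts generally relies on message passing between part nodes. A minimal mean-aggregation round (a generic GNN sketch with made-up part features, unrelated to the paper's actual architecture):

```python
def message_pass(features, edges):
    """One round: each node's new feature is the mean of its neighbors' features."""
    out = {}
    for node, feat in features.items():
        neigh = [features[v] for u, v in edges if u == node]
        out[node] = sum(neigh) / len(neigh) if neigh else feat
    return out

# Toy part graph for a chair: the seat connects to the leg and the back.
feats = {"leg": 1.0, "seat": 3.0, "back": 5.0}
edges = [("seat", "leg"), ("seat", "back"), ("leg", "seat"), ("back", "seat")]
print(message_pass(feats, edges))  # {'leg': 3.0, 'seat': 3.0, 'back': 3.0}
```

Iterating such rounds lets each part's representation absorb information about the parts it must fit against, which is the intuition behind predicting mutually consistent part poses.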

Design and Control of Roller Grasper V2 for In-Hand Manipulation

no code implementations18 Apr 2020 Shenli Yuan, Lin Shao, Connor L. Yako, Alex Gruebele, J. Kenneth Salisbury

The ability to perform in-hand manipulation remains an unsolved problem; having this capability would allow robots to perform sophisticated tasks that require repositioning and reorienting grasped objects.

Tasks: Imitation Learning

Learning to Scaffold the Development of Robotic Manipulation Skills

no code implementations3 Nov 2019 Lin Shao, Toki Migimatsu, Jeannette Bohg

To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment.

UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands

1 code implementation24 Oct 2019 Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg

The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.

Tasks: Object, valid

Learning Visual Dynamics Models of Rigid Objects using Relational Inductive Biases

1 code implementation9 Sep 2019 Fabio Ferreira, Lin Shao, Tamim Asfour, Jeannette Bohg

The first, a Graph Networks (GN)-based approach, considers explicitly defined edge attributes; not only does it consistently underperform an auto-encoder baseline that we modified to predict future states, but our results also indicate how different edge attributes can significantly influence the predictions.

Tasks: Inductive Bias

ClusterNet: 3D Instance Segmentation in RGB-D Images

no code implementations24 Jul 2018 Lin Shao, Ye Tian, Jeannette Bohg

We show that our method generalizes well on real-world data achieving visually better segmentation results.

Tasks: 3D Instance Segmentation, Clustering, +4

Motion-based Object Segmentation based on Dense RGB-D Scene Flow

1 code implementation14 Apr 2018 Lin Shao, Parth Shah, Vikranth Dwaracherla, Jeannette Bohg

Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow.

Tasks: Motion Segmentation, Object, +3
