Search Results for author: Zhecheng Yuan

Found 11 papers, 7 papers with code

RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation

no code implementations • 22 Feb 2024 • Junting Chen, Yao Mu, Qiaojun Yu, Tianming Wei, Silang Wu, Zhecheng Yuan, Zhixuan Liang, Chao Yang, Kaipeng Zhang, Wenqi Shao, Yu Qiao, Huazhe Xu, Mingyu Ding, Ping Luo

To bridge this "ideal-to-real" gap, this paper presents RoboScript, a platform for 1) a deployable robot manipulation pipeline powered by code generation; and 2) a code generation benchmark for robot manipulation tasks specified in free-form natural language.

Code Generation • Common Sense Reasoning • +2
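As an illustration of the code-generation pipeline the RoboScript entry describes (the helper names and robot API below are hypothetical placeholders, not RoboScript's actual interfaces), a minimal Python sketch might prompt a language model with a free-form task description and execute the returned program against a simulated or real robot:

    # Hypothetical sketch of an instruction-to-code manipulation pipeline.
    # query_llm and the robot API are illustrative, not RoboScript's real interfaces.

    PROMPT_TEMPLATE = """You control a robot arm via a Python API:
        robot.detect(name) -> (x, y, z)
        robot.move_to(x, y, z); robot.open_gripper(); robot.close_gripper()
    Write Python code that performs the task: "{task}". Return only code."""

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to a code-generation model."""
        raise NotImplementedError

    def run_task(task: str, robot) -> None:
        code = query_llm(PROMPT_TEMPLATE.format(task=task))
        # Execute the generated program against the (simulated or real) robot interface.
        # A real system would validate/sandbox generated code before running it.
        exec(code, {"robot": robot})

    # e.g. run_task("put the red block into the bowl", robot=sim_robot)

The benchmark half of such a platform would then score whether generated programs of this kind complete free-form tasks in simulation and on real hardware.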

Generalizable Visual Reinforcement Learning with Segment Anything Model

1 code implementation • 28 Dec 2023 • Ziyu Wang, Yanjie Ze, Yifei Sun, Zhecheng Yuan, Huazhe Xu

Learning policies that can generalize to unseen environments is a fundamental challenge in visual reinforcement learning (RL).

Data Augmentation • reinforcement-learning • +1
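The snippet above is motivational, but the title points at using the Segment Anything Model to help visual RL generalize. A hedged sketch of that general idea, segmenting a task-relevant object out of an image observation with the public segment-anything package before handing it to the agent's encoder (the point prompt and the masking step are assumptions for illustration, not necessarily the paper's exact pipeline):

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Load a SAM checkpoint (variant and path are assumptions for this sketch).
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    def masked_observation(obs_rgb: np.ndarray, point_xy: tuple[int, int]) -> np.ndarray:
        """Keep only the object indicated by a point prompt; zero out the rest.

        obs_rgb: (H, W, 3) uint8 image observation; point_xy: pixel prompt on the object.
        """
        predictor.set_image(obs_rgb)
        masks, scores, _ = predictor.predict(
            point_coords=np.array([point_xy]),
            point_labels=np.array([1]),  # 1 = foreground point
        )
        best = masks[np.argmax(scores)]                    # (H, W) boolean mask
        return obs_rgb * best[..., None].astype(obs_rgb.dtype)

    # An RL agent would then be trained/evaluated on masked_observation(frame, prompt) frames.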

GenSim: Generating Robotic Simulation Tasks via Large Language Models

1 code implementation • 2 Oct 2023 • Lirui Wang, Yiyang Ling, Zhecheng Yuan, Mohit Shridhar, Chen Bao, Yuzhe Qin, Bailin Wang, Huazhe Xu, Xiaolong Wang

Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data.

Code Generation

RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization

1 code implementation • NeurIPS 2023 • Zhecheng Yuan, Sizhe Yang, Pu Hua, Can Chang, Kaizhe Hu, Huazhe Xu

Visual Reinforcement Learning (Visual RL), which operates on high-dimensional observations, has long confronted the challenge of out-of-distribution generalization.

Out-of-Distribution Generalization • reinforcement-learning

Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning

no code implementations • 17 Dec 2022 • Zhecheng Yuan, Zhengrong Xue, Bo Yuan, Xueqian Wang, Yi Wu, Yang Gao, Huazhe Xu

Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that generalizes to unseen visual scenarios in a zero-shot manner.

reinforcement-learning • Reinforcement Learning (RL)
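PIE-G's name spells out the core idea of the entry above: plug a pre-trained image encoder into a visual RL agent and keep it fixed. A minimal sketch of that idea, assuming a frozen ImageNet-pretrained ResNet-18 from torchvision as the backbone (the specific backbone and feature layer are illustrative choices, not necessarily the paper's):

    import torch
    import torch.nn as nn
    from torchvision import models

    class FrozenPretrainedEncoder(nn.Module):
        """Frozen ImageNet-pretrained ResNet-18 used as a visual RL observation encoder."""

        def __init__(self):
            super().__init__()
            # torchvision >= 0.13 weights enum
            resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
            # Keep the convolutional trunk, drop the classification head.
            self.trunk = nn.Sequential(*list(resnet.children())[:-2])
            for p in self.trunk.parameters():
                p.requires_grad = False  # encoder stays fixed during RL training

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            # obs: (B, 3, H, W) image observations in [0, 1]
            feat = self.trunk(obs)          # (B, 512, H/32, W/32)
            return feat.mean(dim=(2, 3))    # global average pool -> (B, 512)

Only a small policy/value head on top of these frozen features would then be trained by the RL algorithm, which is what allows the visual representation to carry over to unseen scenes.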

A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning

1 code implementation • 10 Oct 2022 • Guozheng Ma, Zhen Wang, Zhecheng Yuan, Xueqian Wang, Bo Yuan, DaCheng Tao

Visual reinforcement learning (RL), which makes decisions directly from high-dimensional visual inputs, has demonstrated significant potential in various domains.

Data Augmentation • reinforcement-learning • +1
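As a concrete example of the kind of augmentation such a survey covers, random shift (pad-then-crop) of image observations is a standard technique in visual RL, popularized by DrQ-style methods. A minimal self-contained sketch, not code from the survey:

    import torch
    import torch.nn.functional as F

    def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
        """Randomly shift a batch of image observations by up to `pad` pixels.

        obs: (B, C, H, W) tensor of image observations.
        """
        b, c, h, w = obs.shape
        # Replicate-pad the borders, then crop a randomly offset window per sample.
        padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
        out = torch.empty_like(obs)
        for i in range(b):
            top = torch.randint(0, 2 * pad + 1, (1,)).item()
            left = torch.randint(0, 2 * pad + 1, (1,)).item()
            out[i] = padded[i, :, top:top + h, left:left + w]
        return out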

Extraneousness-Aware Imitation Learning

no code implementations • 4 Oct 2022 • Ray Chen Zheng, Kaizhe Hu, Zhecheng Yuan, Boyuan Chen, Huazhe Xu

To tackle this problem, we introduce Extraneousness-Aware Imitation Learning (EIL), a self-supervised approach that learns visuomotor policies from third-person demonstrations with extraneous subsequences.

Imitation Learning

USEEK: Unsupervised SE(3)-Equivariant 3D Keypoints for Generalizable Manipulation

no code implementations • 28 Sep 2022 • Zhengrong Xue, Zhecheng Yuan, Jiashun Wang, Xueqian Wang, Yang Gao, Huazhe Xu

Can a robot manipulate intra-category unseen objects in arbitrary poses with the help of a mere demonstration of grasping pose on a single object instance?

Keypoint Detection • Object
