Search Results for author: Zhou Xian

Found 12 papers, 2 papers with code

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

no code implementations • 6 Feb 2024 • Yufei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Biyik, David Held, Zackory Erickson

Reward engineering has long been a challenge in Reinforcement Learning (RL) research, as it often requires extensive human effort and iterative processes of trial-and-error to design effective reward functions.

reinforcement-learning • Reinforcement Learning (RL)
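The core recipe here is preference-based reward learning, with a vision-language model supplying the preference labels instead of a human. A minimal sketch of that idea follows; `query_vlm_preference` is a hypothetical placeholder, not the paper's actual prompting pipeline.

```python
# Minimal sketch of preference-based reward learning from VLM feedback,
# in the spirit of RL-VLM-F. The VLM query is a hypothetical placeholder;
# the paper's prompting and model details are not reproduced here.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def query_vlm_preference(img_a, img_b, task_text: str) -> int:
    """Hypothetical stand-in for asking a vision-language model which of two
    observations better achieves the task described in `task_text`.
    Returns 0 if it prefers img_a, 1 if it prefers img_b."""
    raise NotImplementedError("Replace with an actual VLM call.")

def preference_loss(reward_model, obs_a, obs_b, label):
    """Bradley-Terry style loss: the preferred observation gets higher reward."""
    r_a, r_b = reward_model(obs_a), reward_model(obs_b)
    logits = torch.stack([r_a, r_b], dim=-1)           # (batch, 2)
    return nn.functional.cross_entropy(logits, label)  # label: 0 or 1 per pair
```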

RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

no code implementations • 2 Nov 2023 • Yufei Wang, Zhou Xian, Feng Chen, Tsun-Hsuan Wang, Yian Wang, Zackory Erickson, David Held, Chuang Gan

We present RoboGen, a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.

Motion Planning
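The generative-simulation loop can be summarized as: propose a task, generate a matching scene and reward, then train a skill. The sketch below only illustrates that structure; all helpers (`propose_task`, `build_scene`, `learn_skill`) are hypothetical placeholders, not RoboGen's API.

```python
# Condensed sketch of a generative-simulation loop in the spirit of RoboGen:
# an agent proposes tasks, generates matching scenes and rewards, then learns
# a skill. Every helper below is a hypothetical stub.
from dataclasses import dataclass

@dataclass
class TaskSpec:
    description: str   # natural-language task, e.g. "open the microwave door"
    scene_assets: list  # asset names / URDF paths to populate the simulator
    reward_code: str    # generated reward-function source to load into the sim

def propose_task(robot_type: str) -> TaskSpec: ...
def build_scene(spec: TaskSpec): ...
def learn_skill(env, spec: TaskSpec): ...

def generative_simulation_loop(robot_type: str, num_tasks: int):
    skills = []
    for _ in range(num_tasks):
        spec = propose_task(robot_type)   # language model proposes a feasible, diverse task
        env = build_scene(spec)           # generative models populate the simulated scene
        policy = learn_skill(env, spec)   # RL / planning / traj-opt on the generated reward
        skills.append((spec.description, policy))
    return skills
```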

Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models

no code implementations • 27 Oct 2023 • Pushkal Katara, Zhou Xian, Katerina Fragkiadaki

We propose Generation to Simulation (Gen2Sim), a method for scaling up robot skill learning in simulation by automating generation of 3D assets, task descriptions, task decompositions and reward functions using large pre-trained generative models of language and vision.

reinforcement-learning
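One of the automated pieces is having a language model write the reward function for a generated task. Below is an illustrative sketch under that assumption; `call_llm` and the prompt format are invented for the example and are not Gen2Sim's actual code.

```python
# Illustrative sketch (not Gen2Sim's implementation) of asking an LLM to write
# a reward function for a generated task, then loading it for training.
import textwrap

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with an actual LLM API call.")

def generate_reward_fn(task_description: str, scene_summary: str):
    prompt = textwrap.dedent(f"""\
        You control a robot in simulation. Scene: {scene_summary}
        Task: {task_description}
        Write a Python function `reward(state) -> float` that is dense and
        increases as the task nears completion. Return only code.""")
    code = call_llm(prompt)
    namespace: dict = {}
    exec(code, namespace)        # trust boundary: generated code runs in-process
    return namespace["reward"]   # callable handed to the RL training loop
```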

Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation

2 code implementations • 30 Jun 2023 • Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, Katerina Fragkiadaki

3D perceptual representations are well suited for robot manipulation as they easily encode occlusions and simplify spatial reasoning.

Action Detection • Pose Prediction • +1
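A common way to build such a 3D representation is to unproject per-pixel 2D features into a point-wise feature cloud using depth and camera intrinsics, which a transformer can then attend over. The sketch below shows only that lifting step, with illustrative shapes and names rather than the paper's implementation.

```python
# Minimal sketch of lifting per-pixel 2D features into a 3D feature cloud via
# depth and camera intrinsics, the kind of representation a 3D feature field
# transformer can attend over. Shapes and names are illustrative.
import torch

def lift_features_to_3d(feat_map: torch.Tensor,   # (C, H, W) per-pixel features
                        depth: torch.Tensor,      # (H, W) depth in meters
                        K: torch.Tensor):         # (3, 3) camera intrinsics
    C, H, W = feat_map.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth
    x = (u.float() - K[0, 2]) * z / K[0, 0]        # unproject pixel -> camera frame
    y = (v.float() - K[1, 2]) * z / K[1, 1]
    points = torch.stack([x, y, z], dim=-1).reshape(-1, 3)   # (H*W, 3) positions
    feats = feat_map.permute(1, 2, 0).reshape(-1, C)         # (H*W, C) features
    return points, feats   # 3D points with attached features, ready for attention
```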

Towards Generalist Robots: A Promising Paradigm via Generative Simulation

no code implementations • 17 May 2023 • Zhou Xian, Theophile Gervet, Zhenjia Xu, Yi-Ling Qiao, Tsun-Hsuan Wang, Yian Wang

This document serves as a position paper that outlines the authors' vision for a potential pathway towards generalist robots.

Scene Generation

Energy-based Models are Zero-Shot Planners for Compositional Scene Rearrangement

no code implementations • 27 Apr 2023 • Nikolaos Gkanatsios, Ayush Jain, Zhou Xian, Yunchu Zhang, Christopher Atkeson, Katerina Fragkiadaki

Language is compositional; an instruction can express multiple relation constraints to hold among objects in a scene that a robot is tasked to rearrange.

Language Modelling • Large Language Model
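Compositionality is handled by giving each relation its own energy over object poses, summing the energies, and minimizing the total by gradient descent. The toy sketch below uses hand-written energies in place of the paper's learned EBMs.

```python
# Toy sketch of composing energy-based constraints for scene rearrangement:
# each language relation contributes an energy over object poses, energies are
# summed, and poses are refined by gradient descent. Hand-written energies
# stand in for learned EBMs.
import torch

def left_of(poses, a, b, margin=0.1):
    # low energy when object a sits to the left of object b along x
    return torch.relu(poses[a, 0] - poses[b, 0] + margin)

def near(poses, a, b, dist=0.2):
    return (torch.norm(poses[a] - poses[b]) - dist).abs()

def rearrange(init_poses: torch.Tensor, constraints, steps=200, lr=0.05):
    poses = init_poses.clone().requires_grad_(True)
    opt = torch.optim.Adam([poses], lr=lr)
    for _ in range(steps):
        energy = sum(fn(poses, *args) for fn, *args in constraints)  # composition = sum of energies
        opt.zero_grad()
        energy.backward()
        opt.step()
    return poses.detach()

# e.g. "put the mug left of the plate and near the bowl":
# rearrange(torch.randn(3, 2), [(left_of, 0, 1), (near, 0, 2)])
```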

FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation

1 code implementation • 4 Mar 2023 • Zhou Xian, Bo Zhu, Zhenjia Xu, Hsiao-Yu Tung, Antonio Torralba, Katerina Fragkiadaki, Chuang Gan

We identify several challenges for fluid manipulation learning by evaluating a set of reinforcement learning and trajectory optimization methods on our platform.

Benchmarking
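One baseline a differentiable environment enables is trajectory optimization by backpropagating the task loss through the simulator. A minimal sketch of that pattern follows; `diff_sim_step` and `fluid_loss` are hypothetical placeholders for the environment's differentiable dynamics and task objective.

```python
# Sketch of gradient-based trajectory optimization through a differentiable
# simulator, the kind of method evaluated alongside RL baselines. The
# simulator step and loss are hypothetical stubs.
import torch

def diff_sim_step(state, action):
    raise NotImplementedError("Differentiable simulator step goes here.")

def fluid_loss(state):
    raise NotImplementedError("Task loss, e.g. distance of fluid to a target region.")

def optimize_trajectory(init_state, horizon=50, iters=100, lr=1e-2):
    actions = torch.zeros(horizon, 3, requires_grad=True)  # e.g. 3-DoF end-effector deltas
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(iters):
        state = init_state
        for t in range(horizon):
            state = diff_sim_step(state, actions[t])  # gradients flow through the sim
        loss = fluid_loss(state)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return actions.detach()
```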

HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks

no code implementations • 17 Mar 2021 • Zhou Xian, Shamit Lal, Hsiao-Yu Tung, Emmanouil Antonios Platanios, Katerina Fragkiadaki

We propose HyperDynamics, a dynamics meta-learning framework that conditions on an agent's interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system.

Attribute • Meta-Learning
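The central mechanism is a hypernetwork: an encoder summarizes a short interaction history, and that summary is mapped to the weights of a small forward-dynamics model. The sketch below shows one way to wire this up, with illustrative layer sizes rather than the paper's architecture.

```python
# Minimal sketch of the hypernetwork idea: encode a few interaction steps,
# then generate the weights of a small forward-dynamics MLP from that
# encoding. Sizes and layout are illustrative, not the paper's design.
import torch
import torch.nn as nn

class HyperDynamicsSketch(nn.Module):
    def __init__(self, obs_dim, act_dim, ctx_dim=64, hidden=32):
        super().__init__()
        in_dim, out_dim = obs_dim + act_dim, obs_dim
        self.encoder = nn.GRU(in_dim, ctx_dim, batch_first=True)
        # hypernetwork: context -> weights and biases of a one-hidden-layer dynamics MLP
        self.n_params = (in_dim * hidden + hidden) + (hidden * out_dim + out_dim)
        self.hyper = nn.Linear(ctx_dim, self.n_params)
        self.dims = (in_dim, hidden, out_dim)

    def forward(self, interactions, obs, act):
        # interactions: (B, T, obs_dim + act_dim) context rollout
        _, ctx = self.encoder(interactions)                # (1, B, ctx_dim)
        params = self.hyper(ctx.squeeze(0))                # (B, n_params)
        in_dim, hidden, out_dim = self.dims
        i = 0
        w1 = params[:, i:i + in_dim * hidden].view(-1, hidden, in_dim); i += in_dim * hidden
        b1 = params[:, i:i + hidden]; i += hidden
        w2 = params[:, i:i + hidden * out_dim].view(-1, out_dim, hidden); i += hidden * out_dim
        b2 = params[:, i:i + out_dim]
        x = torch.cat([obs, act], dim=-1).unsqueeze(-1)    # (B, in_dim, 1)
        h = torch.relu(torch.bmm(w1, x).squeeze(-1) + b1)  # generated dynamics model, layer 1
        return torch.bmm(w2, h.unsqueeze(-1)).squeeze(-1) + b2  # predicted next-state delta
```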

HyperDynamics: Generating Expert Dynamics Models by Observation

no code implementations • ICLR 2021 • Zhou Xian, Shamit Lal, Hsiao-Yu Tung, Emmanouil Antonios Platanios, Katerina Fragkiadaki

We propose HyperDynamics, a framework that conditions on an agent’s interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system.

Attribute

3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators

no code implementations • 12 Nov 2020 • Hsiao-Yu Fish Tung, Zhou Xian, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki

Object motion predictions are computed by a graph neural network that operates over the object features extracted from the 3D neural scene representation.

Object
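The quoted motion-prediction step amounts to message passing over object nodes. The sketch below is a generic fully connected message-passing layer with per-object motion outputs, not the paper's exact architecture.

```python
# Schematic sketch of predicting object motion with a graph neural network:
# per-object features (e.g. extracted from a 3D scene representation) exchange
# messages, and each node predicts its own motion. Generic layer, not the
# paper's architecture.
import torch
import torch.nn as nn

class ObjectMotionGNN(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, motion_dim=6):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(feat_dim + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, motion_dim))

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (N, feat_dim), one feature vector per object in the scene
        N = node_feats.shape[0]
        src = node_feats.unsqueeze(1).expand(N, N, -1)   # sender features
        dst = node_feats.unsqueeze(0).expand(N, N, -1)   # receiver features
        messages = self.edge_mlp(torch.cat([src, dst], dim=-1))  # (N, N, hidden)
        agg = messages.sum(dim=0)                        # aggregate incoming messages per object
        return self.node_mlp(torch.cat([node_feats, agg], dim=-1))  # per-object motion (e.g. 6-DoF)
```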
