no code implementations • 4 Feb 2025 • Xiaowen Qiu, Jincheng Yang, Yian Wang, Zhehuan Chen, YuFei Wang, Tsun-Hsuan Wang, Zhou Xian, Chuang Gan
3D articulated object modeling has long been a challenging problem, since it requires capturing both accurate surface geometries and semantically meaningful, spatially precise structures, parts, and joints.
no code implementations • 14 Nov 2024 • Yian Wang, Xiaowen Qiu, Jiageng Liu, Zhehuan Chen, Jiting Cai, YuFei Wang, Tsun-Hsuan Wang, Zhou Xian, Chuang Gan
Creating large-scale interactive 3D environments is essential for the development of Robotics and Embodied AI research.
1 code implementation • 6 Feb 2024 • YuFei Wang, Zhanyi Sun, Jesse Zhang, Zhou Xian, Erdem Biyik, David Held, Zackory Erickson
Reward engineering has long been a challenge in Reinforcement Learning (RL) research, as it often requires extensive human effort and iterative processes of trial-and-error to design effective reward functions.
no code implementations • 2 Nov 2023 • YuFei Wang, Zhou Xian, Feng Chen, Tsun-Hsuan Wang, Yian Wang, Katerina Fragkiadaki, Zackory Erickson, David Held, Chuang Gan
We present RoboGen, a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
no code implementations • 27 Oct 2023 • Pushkal Katara, Zhou Xian, Katerina Fragkiadaki
We propose Generation to Simulation (Gen2Sim), a method for scaling up robot skill learning in simulation by automating generation of 3D assets, task descriptions, task decompositions and reward functions using large pre-trained generative models of language and vision.
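A toy sketch of one step of this kind of automation (an assumption about the mechanism, not the paper's actual pipeline: here a hard-coded string stands in for a language model's response). The generated source is compiled into a callable reward function and evaluated on a dummy simulator state:

```python
import math

# Stand-in for an LLM-generated reward function (hypothetical output).
llm_response = '''
def reward(state):
    # Dense reward: negative distance from gripper to target object.
    gx, gy, gz = state["gripper_pos"]
    ox, oy, oz = state["object_pos"]
    return -math.sqrt((gx - ox)**2 + (gy - oy)**2 + (gz - oz)**2)
'''

# Compile the generated code into a usable callable.
namespace = {"math": math}
exec(llm_response, namespace)
reward_fn = namespace["reward"]

state = {"gripper_pos": (0.0, 0.0, 0.0), "object_pos": (3.0, 4.0, 0.0)}
print(reward_fn(state))  # -5.0
```

In practice such generated code would be sandboxed and validated before being handed to an RL training loop.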
2 code implementations • 30 Jun 2023 • Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, Katerina Fragkiadaki
3D perceptual representations are well suited for robot manipulation as they easily encode occlusions and simplify spatial reasoning.
Ranked #6 on Robot Manipulation on RLBench
no code implementations • 17 May 2023 • Zhou Xian, Theophile Gervet, Zhenjia Xu, Yi-Ling Qiao, Tsun-Hsuan Wang, Yian Wang
This document serves as a position paper that outlines the authors' vision for a potential pathway towards generalist robots.
no code implementations • 27 Apr 2023 • Nikolaos Gkanatsios, Ayush Jain, Zhou Xian, Yunchu Zhang, Christopher Atkeson, Katerina Fragkiadaki
Language is compositional; an instruction can express multiple relational constraints that must hold among objects in a scene a robot is tasked to rearrange.
no code implementations • 16 Mar 2023 • Tsun-Hsuan Wang, Pingchuan Ma, Andrew Everett Spielberg, Zhou Xian, Hao Zhang, Joshua B. Tenenbaum, Daniela Rus, Chuang Gan
Existing work has typically been tailored for particular environments or representations.
1 code implementation • 4 Mar 2023 • Zhou Xian, Bo Zhu, Zhenjia Xu, Hsiao-Yu Tung, Antonio Torralba, Katerina Fragkiadaki, Chuang Gan
We identify several challenges for fluid manipulation learning by evaluating a set of reinforcement learning and trajectory optimization methods on our platform.
no code implementations • 17 Mar 2021 • Zhou Xian, Shamit Lal, Hsiao-Yu Tung, Emmanouil Antonios Platanios, Katerina Fragkiadaki
We propose HyperDynamics, a dynamics meta-learning framework that conditions on an agent's interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system.
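The core idea can be sketched in a few lines (sizes and architecture below are illustrative assumptions, not the paper's model): an encoder pools a few observed transitions into a system embedding, and a hypernetwork maps that embedding to the weights of a small dynamics model that predicts the next state.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, EMBED_DIM, HIDDEN = 4, 2, 8, 16

# Dynamics model template: next_state = W2 @ tanh(W1 @ [state; action])
IN_DIM = STATE_DIM + ACTION_DIM
N_PARAMS = HIDDEN * IN_DIM + STATE_DIM * HIDDEN

# Fixed (meta-learned) weights of the encoder and the hypernetwork.
W_enc = rng.normal(0, 0.1, (EMBED_DIM, STATE_DIM + ACTION_DIM + STATE_DIM))
W_hyper = rng.normal(0, 0.1, (N_PARAMS, EMBED_DIM))

def encode_interactions(transitions):
    """Average-pool per-transition features (s, a, s') into one embedding."""
    feats = np.array([np.concatenate([s, a, s2]) for s, a, s2 in transitions])
    return np.tanh(W_enc @ feats.mean(axis=0))

def generate_dynamics(embedding):
    """Hypernetwork: system embedding -> parameters of a dynamics model."""
    theta = W_hyper @ embedding
    W1 = theta[: HIDDEN * IN_DIM].reshape(HIDDEN, IN_DIM)
    W2 = theta[HIDDEN * IN_DIM :].reshape(STATE_DIM, HIDDEN)
    def dynamics(state, action):
        return W2 @ np.tanh(W1 @ np.concatenate([state, action]))
    return dynamics

# Usage: condition on a few observed transitions, then predict.
transitions = [(rng.normal(size=STATE_DIM), rng.normal(size=ACTION_DIM),
                rng.normal(size=STATE_DIM)) for _ in range(5)]
dynamics = generate_dynamics(encode_interactions(transitions))
next_state = dynamics(np.zeros(STATE_DIM), np.ones(ACTION_DIM))
print(next_state.shape)  # (4,)
```

The point of the hypernetwork is that a single set of meta-learned weights can emit a different dynamics model for each inferred physical system, rather than fine-tuning one shared model per environment.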
no code implementations • 12 Nov 2020 • Hsiao-Yu Fish Tung, Zhou Xian, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki
Object motion predictions are computed by a graph neural network that operates over the object features extracted from the 3D neural scene representation.
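A minimal sketch of this prediction step, under assumed shapes and a single round of message passing (the paper's network is not specified here): neighbor messages are aggregated over an object graph, node features are updated, and a linear readout emits each object's 3D motion.

```python
import numpy as np

rng = np.random.default_rng(1)
N_OBJECTS, FEAT_DIM = 3, 6

features = rng.normal(size=(N_OBJECTS, FEAT_DIM))  # per-object features
adjacency = np.ones((N_OBJECTS, N_OBJECTS)) - np.eye(N_OBJECTS)  # fully connected

W_msg = rng.normal(0, 0.1, (FEAT_DIM, FEAT_DIM))      # message function
W_upd = rng.normal(0, 0.1, (FEAT_DIM, 2 * FEAT_DIM))  # node update
W_out = rng.normal(0, 0.1, (3, FEAT_DIM))             # readout -> 3D motion

def predict_motions(feats, adj):
    messages = adj @ np.tanh(feats @ W_msg.T)          # sum neighbor messages
    updated = np.tanh(np.concatenate([feats, messages], axis=1) @ W_upd.T)
    return updated @ W_out.T                           # per-object 3D motion

motions = predict_motions(features, adjacency)
print(motions.shape)  # (3, 3)
```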
no code implementations • 11 Jul 2019 • Maximilian Sieb, Zhou Xian, Audrey Huang, Oliver Kroemer, Katerina Fragkiadaki
We cast visual imitation as a visual correspondence problem.