1 code implementation • 10 Jan 2023 • Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David Hoeller, Jia Lin Yuan, Ritvik Singh, Yunrong Guo, Hammad Mazhar, Ajay Mandlekar, Buck Babich, Gavriel State, Marco Hutter, Animesh Garg
We present Orbit, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim.
no code implementations • 26 Sep 2022 • Nikita Rudin, David Hoeller, Marko Bjelonic, Marco Hutter
The robot is free to select its own path and locomotion gait.
no code implementations • 16 Jun 2022 • David Hoeller, Nikita Rudin, Christopher Choy, Animashree Anandkumar, Marco Hutter
We propose a learning-based method to reconstruct the local terrain for locomotion with a mobile robot traversing urban environments.
4 code implementations • 24 Sep 2021 • Nikita Rudin, David Hoeller, Philipp Reist, Marco Hutter
In this work, we present and study a training set-up that achieves fast policy generation for real-world robotic tasks by using massive parallelism on a single workstation GPU.
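The core idea of the set-up — stepping thousands of environment instances as one batched array operation so that policy rollouts saturate a single GPU — can be sketched as follows. This is a toy CPU illustration of the vectorized-rollout pattern, not the paper's actual simulator; the environment, its dynamics, and all names are hypothetical.

```python
import numpy as np

class VectorizedPointEnv:
    """Toy batched environment: every instance is a row of one array,
    so a single vectorized op steps all environments at once
    (hypothetical example of the massively parallel rollout pattern)."""

    def __init__(self, num_envs: int, seed: int = 0):
        self.num_envs = num_envs
        self.rng = np.random.default_rng(seed)
        self.pos = np.zeros((num_envs, 2))

    def reset(self) -> np.ndarray:
        self.pos = self.rng.uniform(-1.0, 1.0, size=(self.num_envs, 2))
        return self.pos.copy()

    def step(self, actions: np.ndarray):
        # One batched update advances every environment simultaneously;
        # on a GPU this array op is where the parallel speed-up comes from.
        self.pos += 0.1 * actions
        rewards = -np.linalg.norm(self.pos, axis=1)  # closer to origin is better
        return self.pos.copy(), rewards

envs = VectorizedPointEnv(num_envs=4096)
obs = envs.reset()
obs, rew = envs.step(np.ones_like(obs))
```

With thousands of environments resident on the device, the policy sees a large, fresh batch of transitions every step, which is what makes minutes-scale training on one workstation plausible.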
4 code implementations • 24 Aug 2021 • Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, Gavriel State
Isaac Gym offers a high-performance learning platform for training policies on a wide variety of robotics tasks directly on the GPU.
1 code implementation • 18 Mar 2021 • Mayank Mittal, David Hoeller, Farbod Farshidian, Marco Hutter, Animesh Garg
A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles.
no code implementations • 7 Mar 2021 • David Hoeller, Lorenz Wellhausen, Farbod Farshidian, Marco Hutter
We show that decoupling the pipeline into these components results in a sample efficient policy learning stage that can be fully trained in simulation in just a dozen minutes.
no code implementations • 21 Sep 2020 • Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Animashree Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg
We present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped (the Unitree Laikago).
no code implementations • 8 Oct 2019 • Farbod Farshidian, David Hoeller, Marco Hutter
The DMPC actor is a Model Predictive Control (MPC) optimizer with an objective function defined in terms of a value function estimated by the critic.
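The idea of an MPC actor whose objective incorporates a critic-estimated value function can be sketched with a one-step, finite-candidate version: choose the action minimizing the stage cost minus the critic's value at the successor state. This is a hedged illustration under simplifying assumptions (one-step horizon, discrete action candidates); the function names and toy dynamics are hypothetical, not the paper's formulation.

```python
import numpy as np

def dmpc_action(state, candidate_actions, dynamics, stage_cost, value_fn):
    """Pick the action minimizing stage cost plus the negated critic value
    at the successor state (one-step stand-in for the MPC optimizer)."""
    costs = [stage_cost(state, a) - value_fn(dynamics(state, a))
             for a in candidate_actions]
    return candidate_actions[int(np.argmin(costs))]

# Toy problem: scalar state, value is higher nearer the origin.
dynamics = lambda s, a: s + a          # hypothetical transition model
stage_cost = lambda s, a: 0.1 * a**2   # penalize control effort
value_fn = lambda s: -abs(s)           # stand-in for the learned critic

best = dmpc_action(1.0, [-1.0, 0.0, 1.0], dynamics, stage_cost, value_fn)
```

Here the critic's value function plays the role of the terminal cost, letting a short-horizon optimizer act with long-horizon information.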