We introduce PACOH-RL, a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.
In this paper, we model perception failures as invisible obstacles and pits, and train a reinforcement learning (RL) based local navigation policy to guide our legged robot.
Elevation maps are commonly used to represent the environment of mobile robots and are instrumental for locomotion and navigation tasks.
Instead of relying on a value expectation, we estimate the complete value distribution to account for uncertainty in the robot's interaction with the environment.
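The distinction between a value expectation and a full value distribution can be illustrated with a small sketch. This is an assumed, minimal example (not the paper's code): given sampled returns, a risk-aware estimate such as CVaR reacts to rare bad outcomes that the plain mean averages away.

```python
import statistics

def expected_value(return_samples):
    # Standard value estimate: the mean of the sampled returns.
    return statistics.mean(return_samples)

def cvar(return_samples, alpha=0.25):
    # Risk-aware estimate: the mean of the worst alpha-fraction of returns,
    # which penalizes actions with uncertain, heavy-tailed outcomes.
    ordered = sorted(return_samples)
    k = max(1, int(alpha * len(ordered)))
    return statistics.mean(ordered[:k])

# One rare catastrophic return barely moves the mean but dominates the CVaR.
returns = [1.0, 1.0, 1.0, 1.0, -5.0]
```

Here `expected_value(returns)` is -0.2, while `cvar(returns)` is -5.0, showing why a distributional estimate can make a robot more cautious in risky interactions.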
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
no code implementations • 10 Jan 2023 • Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David Hoeller, Jia Lin Yuan, Pooria Poorsarvi Tehrani, Ritvik Singh, Yunrong Guo, Hammad Mazhar, Ajay Mandlekar, Buck Babich, Gavriel State, Marco Hutter, Animesh Garg
We present ORBIT, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim.
Detecting objects of interest, such as human survivors, safety equipment, and structure access points, is critical to any search-and-rescue operation.
We use meta-reinforcement learning to train a locomotion policy that can quickly adapt to different designs.
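The fast-adaptation idea behind such meta-learning can be sketched in one dimension. This is a hypothetical MAML-style illustration, not the authors' implementation: a meta-learned initial parameter is specialized to a new design (`target`) with only a few gradient steps on a toy quadratic loss.

```python
def adapt(theta_init, target, lr=0.4, steps=3):
    # Inner-loop adaptation: a few gradient-descent steps on the task loss
    # (theta - target)**2, starting from the meta-learned initialization.
    theta = theta_init
    for _ in range(steps):
        grad = 2.0 * (theta - target)  # gradient of the toy quadratic loss
        theta -= lr * grad
    return theta
```

Starting from `theta_init = 0.0` with `target = 1.0`, three steps already bring the parameter within a few percent of the new task's optimum, which is the behavior meta-RL aims for when a robot's design changes.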
no code implementations • 2 Aug 2022 • Robin Schmid, Deegan Atha, Frederik Schöller, Sharmita Dey, Seyed Fakoorian, Kyohei Otsu, Barry Ridge, Marko Bjelonic, Lorenz Wellhausen, Marco Hutter, Ali-akbar Agha-mohammadi
Typically, this relies on a semantic understanding obtained through supervised learning from images annotated by a human expert.
We propose a learning-based method to reconstruct the local terrain for locomotion with a mobile robot traversing urban environments.
Imitation learning approaches such as adversarial motion priors aim to reduce this problem by encouraging a pre-defined motion style.
LiDAR-based localization and mapping is a core component of many modern robotic systems because it directly integrates range and geometry, allowing precise motion estimation and the generation of high-quality maps in real time.
In this work, we propose Deep Measurement Update (DMU), a novel general update rule for a wide range of systems.
In this work, we present and study a training setup that achieves fast policy generation for real-world robotic tasks by using massive parallelism on a single workstation GPU.
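The core pattern of massively parallel RL training is stepping many environments as one batched array operation. The sketch below is an assumed toy example (not the paper's simulator): thousands of trivial point-mass environments advance in a single vectorized update, which is what makes GPU-scale data collection fast.

```python
import numpy as np

class BatchedPointEnv:
    # Toy batch of 1-D environments: each agent moves toward a goal at 1.0.
    def __init__(self, num_envs):
        self.pos = np.zeros(num_envs)
        self.goal = np.ones(num_envs)

    def step(self, actions):
        # One vectorized update advances every environment at once;
        # on a GPU the same pattern runs over thousands of robots.
        self.pos += actions
        rewards = -np.abs(self.goal - self.pos)
        dones = np.abs(self.goal - self.pos) < 0.05
        return self.pos.copy(), rewards, dones

env = BatchedPointEnv(num_envs=4096)
obs, rew, done = env.step(np.full(4096, 0.1))
```

A single `step` call here produces 4096 transitions, so the per-sample overhead of the Python loop disappears and throughput is bounded by the batched arithmetic instead.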
We first evaluate a supervised learning approach on synthetic data for which we have the full ground-truth available and subsequently move to several real-world datasets.
In this paper, we introduce a method for visual relocalization using the geometric information from a 3D surfel map.
We present a learning algorithm for training a single policy that imitates multiple gaits of a walking robot.
A kitchen assistant needs to operate human-scale objects, such as cabinets and ovens, in unmapped environments with dynamic obstacles.
We show that decoupling the pipeline into these components results in a sample efficient policy learning stage that can be fully trained in simulation in just a dozen minutes.
Reliable robot pose estimation is a key building block of many robot autonomy pipelines, with LiDAR localization being an active research domain.
The trained controller has taken two generations of quadrupedal ANYmal robots to a variety of natural environments that are beyond the reach of prior published work in legged locomotion.
no code implementations • 4 Dec 2019 • Abel Gawel, Hermann Blum, Johannes Pankert, Koen Krämer, Luca Bartolomei, Selen Ercan, Farbod Farshidian, Margarita Chli, Fabio Gramazio, Roland Siegwart, Marco Hutter, Timothy Sandy
We present a fully integrated sensing and control system that enables mobile manipulator robots to execute building tasks with millimeter-scale accuracy on building construction sites.
This paper addresses the problem of legged locomotion in non-flat terrain.
In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes.
We experimentally validate our approach on the quadrupedal robot ANYmal, a dog-sized system with 12 degrees of freedom.
In this work, we present a whole-body Nonlinear Model Predictive Control approach for rigid-body systems subject to contacts.
We introduce and evaluate a novel method for visual place recognition, demonstrating robustness to perceptual aliasing and observation noise.