Recent years have witnessed an emerging paradigm shift toward embodied artificial intelligence, in which an agent must learn to solve challenging tasks by interacting with its environment.
no code implementations • 28 Oct 2021 • Nicholas Roy, Ingmar Posner, Tim Barfoot, Philippe Beaudoin, Yoshua Bengio, Jeannette Bohg, Oliver Brock, Isabelle Depatie, Dieter Fox, Dan Koditschek, Tomas Lozano-Perez, Vikash Mansinghka, Christopher Pal, Blake Richards, Dorsa Sadigh, Stefan Schaal, Gaurav Sukhatme, Denis Therien, Marc Toussaint, Michiel Van de Panne
Machine learning has long since become a keystone technology, accelerating science and applications in a broad range of domains.
Language-guided robots performing home and office tasks must navigate in and interact with the world.
We explore possible methods for multi-task transfer learning that seek to exploit the shared physical structure of robotics tasks.
In this work we aim to solve this problem by optimizing the efficiency and resource utilization of reinforcement learning algorithms instead of relying on distributed computation.
We present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures.
This information shapes the learned loss function so that the environment need not provide it at meta-test time.
Then, we leverage this approximate model, along with a notion of reachability based on Mean First Passage Times, to perform model-based reinforcement learning.
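As a point of reference for the reachability notion mentioned above, the mean first passage time (MFPT) between states of a Markov chain can be computed in closed form from the transition matrix. The sketch below is a generic textbook computation, not the paper's specific method: for each target state j, it solves the linear system (I − Q)m = 1, where Q is the transition matrix with the j-th row and column removed.

```python
import numpy as np

def mean_first_passage_times(P):
    """M[i, j] = expected number of steps to first reach j from i,
    for an ergodic Markov chain with transition matrix P (rows sum to 1).

    For each target j, solves (I - Q) m = 1, where Q is P with the
    j-th row and column deleted."""
    n = P.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]      # all states except the target
        Q = P[np.ix_(idx, idx)]
        m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        M[idx, j] = m
    return M

# Two-state chain: leaving state 0 succeeds with prob 0.1 per step,
# so the 0 -> 1 passage time is geometric with mean 10.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
M = mean_first_passage_times(P)
print(M[0, 1])  # 10.0
```

A model-based agent could use such an MFPT matrix as a reachability score, preferring to plan through states with short expected passage times to the goal.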
A new mechanism for efficiently solving Markov decision processes (MDPs) is proposed in this paper.
The solution convergence of Markov decision processes (MDPs) can be accelerated by prioritized sweeping of states, ranked by their potential impact on other states.
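The idea above can be sketched with classic prioritized sweeping on a known tabular MDP: states sit in a priority queue ordered by Bellman error, and whenever a state's value changes, its predecessors are re-queued, so updates propagate backward along the chain of influence. This is a minimal generic sketch of the standard algorithm, not the specific mechanism proposed in the paper.

```python
import heapq
import numpy as np

def prioritized_sweeping_vi(P, R, gamma=0.95, theta=1e-8):
    """Value iteration with prioritized sweeping on a known MDP.

    P: (A, S, S) transition probabilities; R: (A, S) expected rewards.
    States are updated in decreasing order of Bellman error; a state's
    predecessors are re-queued whenever its value changes."""
    A, S, _ = P.shape
    V = np.zeros(S)

    # predecessors[s] = states that can transition into s under some action
    preds = [set() for _ in range(S)]
    for a in range(A):
        src, dst = np.nonzero(P[a] > 0)
        for i, j in zip(src, dst):
            preds[j].add(i)

    def backup(s):
        return max(R[a, s] + gamma * P[a, s] @ V for a in range(A))

    # initialize the queue with each state's current Bellman error
    heap = [(-abs(backup(s) - V[s]), s) for s in range(S)]
    heapq.heapify(heap)
    while heap:
        neg_pri, s = heapq.heappop(heap)
        if -neg_pri < theta:          # largest remaining error is negligible
            break
        V[s] = backup(s)
        for p in preds[s]:
            pri = abs(backup(p) - V[p])
            if pri > theta:
                heapq.heappush(heap, (-pri, p))
    return V
```

Compared with plain sweeps over all states, this focuses computation on the states whose updates would change the value function the most, which is where the speed-up comes from.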
We complete unseen tasks by choosing new sequences of skill latents to control the robot via MPC, where the MPC model is the pre-trained skill policy executed in the simulation environment, run in parallel with the real robot.
In particular, we first use simulation to jointly learn a policy for a set of low-level skills, and a "skill embedding" parameterization which can be used to compose them.
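The planning loop these two sentences describe can be sketched as random-shooting MPC over skill latents: sample candidate latent sequences, roll each one out through the skill policy inside a simulator, and keep the sequence whose rollout ends closest to the goal. Everything below is a toy stand-in, assuming a hypothetical `skill_policy` and `sim_step` in place of the learned policy and the actual simulation environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a pre-trained skill policy pi(s, z) -> action
# and a simulator step(s, a) -> next state. Both are toy functions here
# so the sketch runs end to end.
def skill_policy(state, z):
    return z - state              # toy "skill": drive the state toward z

def sim_step(state, action):
    return state + 0.5 * action   # toy dynamics

def plan_skill_sequence(state, goal, horizon=4, n_candidates=256, latent_dim=2):
    """Random-shooting MPC over sequences of skill latents: sample
    candidate sequences, roll each out through the simulated skill
    policy, and return the one whose final state is closest to goal."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.normal(size=(horizon, latent_dim))
        s = state.copy()
        for z in seq:
            s = sim_step(s, skill_policy(s, z))
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

state, goal = np.zeros(2), np.array([1.0, -1.0])
seq, cost = plan_skill_sequence(state, goal)
```

In an MPC deployment, only the first latent of the best sequence would be executed on the real robot before re-planning from the newly observed state.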
In this work, we introduce a method based on region-growing that allows learning in an environment with any pair of initial and goal states.
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof.
Recent approaches in robotics follow the insight that perception is facilitated by interaction with the environment.
The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots.