Search Results for author: Anusha Nagabandi

Found 8 papers, 6 papers with code

Deep Dynamics Models for Learning Dexterous Manipulation

2 code implementations 25 Sep 2019 Anusha Nagabandi, Kurt Konolige, Sergey Levine, Vikash Kumar

Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills.

Model Predictive Control

Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

8 code implementations 8 Aug 2017 Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, Sergey Levine

Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance.

Model-based Reinforcement Learning · Model Predictive Control · +2
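
The general recipe behind this line of work is to fit a neural network that predicts state changes from (state, action) pairs and then plan through it with sampling-based model predictive control. The sketch below illustrates only that general recipe; the network size, horizon, candidate count, and reward function are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch: random-shooting MPC with a learned dynamics model.
# All hyperparameters (sizes, horizon, candidate count) are illustrative.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2

# Dynamics model predicts the change in state given (state, action).
dynamics = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, STATE_DIM),
)

def fit_dynamics(states, actions, next_states, epochs=50):
    """Supervised regression on observed transitions: (s, a) -> s' - s."""
    opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
    targets = next_states - states
    for _ in range(epochs):
        pred = dynamics(torch.cat([states, actions], dim=-1))
        loss = ((pred - targets) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def reward(state, action):
    """Placeholder task reward; the real reward is task-specific."""
    return -state.pow(2).sum(dim=-1)

@torch.no_grad()
def mpc_action(state, horizon=10, n_candidates=1000):
    """Random-shooting MPC: sample action sequences, roll them out
    through the learned model, execute the first action of the best one.
    `state` is a 1-D tensor of length STATE_DIM."""
    s = state.expand(n_candidates, STATE_DIM).clone()
    action_seqs = torch.rand(n_candidates, horizon, ACTION_DIM) * 2 - 1
    returns = torch.zeros(n_candidates)
    for t in range(horizon):
        a = action_seqs[:, t]
        returns += reward(s, a)
        s = s + dynamics(torch.cat([s, a], dim=-1))  # predicted next state
    return action_seqs[returns.argmax(), 0]
```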

Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning

2 code implementations ICLR 2019 Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald S. Fearing, Pieter Abbeel, Sergey Levine, Chelsea Finn

Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.

Continuous Control · Meta-Learning · +5
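
The adaptation idea can be pictured as taking a few gradient steps on the most recent transitions to specialize a meta-learned dynamics model before planning with it. The snippet below shows only that test-time inner update, with assumed function names and step sizes; the meta-training loop and the planner are omitted.

```python
# Sketch of test-time adaptation for a meta-learned dynamics model:
# specialize the model to the last few transitions with a handful of
# gradient steps, then plan with the adapted copy. Meta-training and the
# planner are omitted; names and step sizes are illustrative assumptions.
import copy
import torch

def adapt_dynamics(meta_model, recent_states, recent_actions,
                   recent_next_states, inner_lr=0.01, inner_steps=5):
    """Return a copy of the meta-learned model adapted to recent data."""
    adapted = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    inputs = torch.cat([recent_states, recent_actions], dim=-1)
    targets = recent_next_states - recent_states
    for _ in range(inner_steps):
        loss = ((adapted(inputs) - targets) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return adapted

# At each control step one would adapt on the last few transitions and
# hand the adapted model to an MPC planner like the one sketched above.
```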

MELD: Meta-Reinforcement Learning from Images via Latent State Models

1 code implementation 26 Oct 2020 Tony Z. Zhao, Anusha Nagabandi, Kate Rakelly, Chelsea Finn, Sergey Levine

Meta-reinforcement learning algorithms can enable autonomous agents, such as robots, to quickly acquire new behaviors by leveraging prior experience in a set of related training tasks.

Meta-Learning · Meta Reinforcement Learning · +3

Model-Based Reinforcement Learning via Latent-Space Collocation

1 code implementation 24 Jun 2021 Oleh Rybkin, Chuning Zhu, Anusha Nagabandi, Kostas Daniilidis, Igor Mordatch, Sergey Levine

The resulting latent collocation method (LatCo) optimizes trajectories of latent states, which improves over previously proposed shooting methods for visual model-based RL on tasks with sparse rewards and long-term goals.

Model-based Reinforcement Learning · reinforcement-learning · +1
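
A rough way to picture collocation in a learned latent space: treat both the latent states and the actions as decision variables and penalize disagreement with the learned latent dynamics, instead of rolling actions forward as shooting methods do. The sketch below uses a fixed penalty weight and assumed model shapes; the paper's constrained formulation and learned latent variable model are not reproduced here.

```python
# Illustrative sketch of collocation-style planning in a learned latent
# space: latent states and actions are both decision variables, and a soft
# penalty keeps the state sequence consistent with the learned dynamics.
# A fixed penalty weight is used for simplicity; model names, shapes, and
# the reward network are assumptions.
import torch

LATENT_DIM, ACTION_DIM, HORIZON = 32, 4, 15

latent_dynamics = torch.nn.Sequential(   # (z_t, a_t) -> z_{t+1}, assumed pretrained
    torch.nn.Linear(LATENT_DIM + ACTION_DIM, 128), torch.nn.ELU(),
    torch.nn.Linear(128, LATENT_DIM))
reward_model = torch.nn.Sequential(       # z_t -> predicted reward, assumed pretrained
    torch.nn.Linear(LATENT_DIM, 128), torch.nn.ELU(),
    torch.nn.Linear(128, 1))

def plan_by_collocation(z0, steps=200, penalty=10.0, lr=0.05):
    """Jointly optimize latent states z_1..z_T and actions a_0..a_{T-1}."""
    z = torch.zeros(HORIZON, LATENT_DIM, requires_grad=True)
    a = torch.zeros(HORIZON, ACTION_DIM, requires_grad=True)
    opt = torch.optim.Adam([z, a], lr=lr)
    for _ in range(steps):
        prev = torch.cat([z0.unsqueeze(0), z[:-1]], dim=0)    # z_0 .. z_{T-1}
        pred = latent_dynamics(torch.cat([prev, a], dim=-1))  # model prediction
        dyn_violation = ((z - pred) ** 2).sum()               # consistency penalty
        total_reward = reward_model(z).sum()
        loss = -total_reward + penalty * dyn_violation
        opt.zero_grad(); loss.backward(); opt.step()
    return a.detach()
```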

Learning Image-Conditioned Dynamics Models for Control of Under-actuated Legged Millirobots

no code implementations 14 Nov 2017 Anusha Nagabandi, Guangzhao Yang, Thomas Asmar, Ravi Pandya, Gregory Kahn, Sergey Levine, Ronald S. Fearing

We present an approach for controlling a real-world legged millirobot that is based on learned neural network models.

Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL

no code implementations ICLR 2019 Anusha Nagabandi, Chelsea Finn, Sergey Levine

The goal in this paper is to develop a method for continual online learning from an incoming stream of data, using deep neural network models.

Meta-Learning · Model-based Reinforcement Learning
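
A minimal way to frame this setting is to keep a sliding window over the incoming stream and repeatedly refresh a dynamics model on it, as sketched below. The paper's actual method additionally meta-learns a prior and maintains a mixture of task-specific models, which this sketch omits; the window size and learning rate are assumptions.

```python
# Minimal sketch of continual online learning from a transition stream:
# keep a sliding window of recent (state, action, next_state) tuples and
# keep updating a dynamics model on it. The mixture-of-models machinery
# from the paper is omitted; hyperparameters are illustrative.
from collections import deque
import torch

window = deque(maxlen=256)  # most recent transitions from the stream

def online_update(model, transition, grad_steps=2, lr=1e-3):
    """Ingest one transition from the stream and refresh the model."""
    window.append(transition)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    states, actions, next_states = (torch.stack(x) for x in zip(*window))
    inputs = torch.cat([states, actions], dim=-1)
    targets = next_states - states
    for _ in range(grad_steps):
        loss = ((model(inputs) - targets) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```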
