Search Results for author: Shikhar Bahl

Found 7 papers, 4 papers with code

Hierarchical Neural Dynamic Policies

no code implementations · 12 Jul 2021 · Shikhar Bahl, Abhinav Gupta, Deepak Pathak

We tackle the problem of generalization to unseen configurations for dynamic tasks in the real world while learning from high-dimensional image input.

Neural Dynamic Policies for End-to-End Sensorimotor Learning

no code implementations · NeurIPS 2020 · Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak

We show that NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks for both imitation and reinforcement learning setups.


Contextual Imagined Goals for Self-Supervised Robotic Learning

1 code implementation · 23 Oct 2019 · Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, Sergey Levine

When the robot's environment and available objects vary, as they do in most open-world settings, the robot must propose to itself only goals it can accomplish in its current setting with the objects at hand.
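The idea of proposing only feasible goals can be sketched as sampling goal candidates near an embedding of the current scene, so imagined goals stay consistent with the objects actually present. This is a minimal illustrative sketch, not the paper's implementation: the names `propose_goal` and `decoder`, the Gaussian perturbation, and the identity decoder are all assumptions made for the example.

```python
import random

random.seed(0)

def propose_goal(context_embedding, decoder, n_candidates=8, noise=0.1):
    """Sample goal latents near a context embedding and decode them.

    Hypothetical sketch: conditioning the goal distribution on the current
    observation keeps imagined goals close to what the scene affords.
    """
    goals = []
    for _ in range(n_candidates):
        # Perturb the context embedding to get a candidate goal latent.
        z = [c + noise * random.gauss(0.0, 1.0) for c in context_embedding]
        goals.append(decoder(z))
    return goals

# Toy decoder (identity), so proposed goals remain near the observed context.
goals = propose_goal([0.0, 0.0, 0.0, 0.0], decoder=lambda z: z)
```

With a learned decoder in place of the identity function, each candidate would be a full goal image grounded in the current scene.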

Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards

1 code implementation · 13 Jun 2019 · Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

Connector insertion and many other tasks commonly found in modern manufacturing settings involve complex contact dynamics and friction.

Skew-Fit: State-Covering Self-Supervised Reinforcement Learning

1 code implementation · ICML 2020 · Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine

Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills.

Residual Reinforcement Learning for Robot Control

no code implementations · 7 Dec 2018 · Tobias Johannink, Shikhar Bahl, Ashvin Nair, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

In this paper, we study how to solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control and a residual part that is solved with RL.
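The residual decomposition described above can be sketched in a few lines: the executed action is the sum of a hand-designed controller's output and a learned residual term. This is a hedged illustration under simplifying assumptions, not the paper's code: `pd_controller`, `residual_action`, the 1-D PD law, and the gains are all hypothetical names and values chosen for the example.

```python
def pd_controller(state, target, kp=1.0, kd=0.1):
    """Conventional feedback control: a simple 1-D PD law toward a target."""
    pos, vel = state
    return kp * (target - pos) - kd * vel

def residual_action(state, target, residual_policy):
    """Residual RL: total action = hand-designed control + learned residual.

    Only residual_policy is trained with RL; the base controller handles
    the part of the problem that feedback control already solves well.
    """
    return pd_controller(state, target) + residual_policy(state)

# With a zero residual, the combined policy reduces to the base controller.
base = pd_controller((0.0, 0.0), 1.0)          # -> 1.0
combined = residual_action((0.0, 0.0), 1.0, lambda s: 0.0)
```

A design consequence of this additive form is that training can start from the base controller's behavior rather than from scratch, since an untrained (near-zero) residual leaves the hand-designed controller in charge.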

Visual Reinforcement Learning with Imagined Goals

2 code implementations · NeurIPS 2018 · Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, Sergey Levine

For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires.

