Search Results for author: Himanshu Sahni

Found 7 papers, 3 papers with code

Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

no code implementations ICML 2020 Ashley Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Transfer Learning

Hard Attention Control By Mutual Information Maximization

no code implementations 10 Mar 2021 Himanshu Sahni, Charles Isbell

We also show that the agent's internal representation of the surroundings, a live mental map, can be used for control in two partially observable reinforcement learning tasks.

Hard Attention
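As a rough illustration of the "live mental map" idea, the sketch below accumulates small hard-attention glimpses of a grid world into a persistent map. This is an invented toy setup, not the paper's learned model: the grid, window size, and `glimpse` function are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 6x6 environment; the agent never sees it all at once.
GRID = np.arange(36).reshape(6, 6)
WINDOW = 2  # side length of the hard-attention window

mental_map = np.full_like(GRID, -1)       # -1 marks cells not yet observed
seen = np.zeros_like(GRID, dtype=bool)

def glimpse(top, left):
    """Attend to a WINDOW x WINDOW patch and write it into the mental map."""
    patch = GRID[top:top + WINDOW, left:left + WINDOW]
    mental_map[top:top + WINDOW, left:left + WINDOW] = patch
    seen[top:top + WINDOW, left:left + WINDOW] = True
    return patch

glimpse(0, 0)
glimpse(2, 2)
coverage = seen.mean()  # fraction of the environment now in the mental map
```

A downstream controller would then condition on `mental_map` rather than on the raw (partial) observation.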

Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

1 code implementation 21 Feb 2020 Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Imitation Learning Transfer Learning
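The $Q(s, s')$ backup can be illustrated on a tabular toy problem. The sketch below applies the recurrence $Q(s, s') = r(s, s') + \gamma \max_{s''} Q(s', s'')$ on an invented 5-state chain and acts by choosing the best neighboring state; the paper's D3G method instead uses deep networks and learned dynamics, which are omitted here.

```python
import numpy as np

# Hypothetical deterministic chain: from state i you can move to i-1 or i+1.
# Reward 1.0 for entering the terminal state 4, else 0.
N_STATES = 5
GAMMA = 0.9

def neighbors(s):
    """States reachable in one step from s (clipped to the chain)."""
    return {max(s - 1, 0), min(s + 1, N_STATES - 1)}

def reward(s, s2):
    return 1.0 if s2 == N_STATES - 1 else 0.0

# Q[s, s'] : utility of transitioning s -> s', then acting optimally.
Q = np.zeros((N_STATES, N_STATES))
for _ in range(100):  # value iteration on the Q(s, s') backup
    for s in range(N_STATES):
        for s2 in neighbors(s):
            if s2 == N_STATES - 1:
                Q[s, s2] = reward(s, s2)  # terminal: no bootstrapping
            else:
                Q[s, s2] = reward(s, s2) + GAMMA * max(
                    Q[s2, s3] for s3 in neighbors(s2)
                )

# Acting reduces to picking the best neighboring state -- no explicit actions.
best_next = max(neighbors(0), key=lambda s2: Q[0, s2])
```

Note how the policy falls out of the value function directly: the agent selects the successor state with the highest $Q(s, s')$.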

Addressing Sample Complexity in Visual Tasks Using HER and Hallucinatory GANs

2 code implementations NeurIPS 2019 Himanshu Sahni, Toby Buckley, Pieter Abbeel, Ilya Kuzovkin

In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal.
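A rough sketch of the hindsight-relabeling half of the idea is below, on invented toy trajectory data. The hallucinatory-GAN part, which alters image observations with a generative model so they appear to reach the goal, is omitted; here the "achieved goal" is just read off each transition.

```python
# Hypothetical trajectory of transitions; no actual environment involved.
trajectory = [
    {"obs": 0, "action": 1, "achieved": 1},
    {"obs": 1, "action": 1, "achieved": 2},
    {"obs": 2, "action": 1, "achieved": 3},
]
desired_goal = 9  # never reached, so the sparse reward would always be 0

def reward(achieved, goal):
    """Sparse goal-reaching reward."""
    return 1.0 if achieved == goal else 0.0

# Hindsight: pretend the goal was whatever the trajectory actually achieved,
# turning a failed episode into a successful one for learning purposes.
hindsight_goal = trajectory[-1]["achieved"]
relabeled = [
    {**t, "goal": hindsight_goal, "reward": reward(t["achieved"], hindsight_goal)}
    for t in trajectory
]
```

The relabeled transitions now contain at least one positive reward, which is what lets HER make progress under sparse rewards.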

Imitating Latent Policies from Observation

2 code implementations 21 May 2018 Ashley D. Edwards, Himanshu Sahni, Yannick Schroecker, Charles L. Isbell

In this paper, we describe a novel approach to imitation learning that infers latent policies directly from state observations.

Imitation Learning
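A crude way to see what "latent policies from state observations" means: with no action labels, the observable effect of an action is a state change, so distinct state deltas can stand in for latent actions. The toy sketch below does exactly that on invented transitions; the paper's ILPO model instead learns a latent forward-dynamics model with neural networks.

```python
# Invented state-only demonstrations: (state, next_state) pairs, no actions.
demos = [(0, 1), (1, 2), (2, 1), (1, 0)]

# Treat each distinct state delta as one latent action.
deltas = sorted({s2 - s1 for s1, s2 in demos})
latent_of = {d: i for i, d in enumerate(deltas)}

# Latent policy: which latent action the demonstrator took in each state.
# (For repeated states, the last observed transition wins in this toy version.)
latent_policy = {s1: latent_of[s2 - s1] for s1, s2 in demos}
```

A final, cheap alignment step (a few environment interactions) would then map each latent action to a real environment action.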

Learning to Compose Skills

no code implementations 30 Nov 2017 Himanshu Sahni, Saurabh Kumar, Farhan Tejani, Charles Isbell

We present a differentiable framework capable of learning a wide variety of compositions of simple policies that we call skills.
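One simple differentiable composition, sketched below on an invented discrete action space: mix the action logits of fixed "skill" policies with learnable weights, and keep everything differentiable through a softmax. The skills and weights here are toy assumptions, not the paper's architecture.

```python
import numpy as np

N_ACTIONS = 4

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def skill_a(obs):
    """Hypothetical skill: always prefers action 0."""
    return np.array([2.0, 0.0, 0.0, 0.0])

def skill_b(obs):
    """Hypothetical skill: always prefers action 3."""
    return np.array([0.0, 0.0, 0.0, 2.0])

def composed_policy(obs, w):
    """Softmax over a weighted sum of skill logits; w is learnable."""
    logits = w[0] * skill_a(obs) + w[1] * skill_b(obs)
    return softmax(logits)

probs_a = composed_policy(None, np.array([1.0, 0.0]))  # weight skill_a only
probs_b = composed_policy(None, np.array([0.0, 1.0]))  # weight skill_b only
```

Because the composition weights enter the logits linearly, gradients flow through them, so the mixture itself can be trained end to end.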

State Space Decomposition and Subgoal Creation for Transfer in Deep Reinforcement Learning

no code implementations 24 May 2017 Himanshu Sahni, Saurabh Kumar, Farhan Tejani, Yannick Schroecker, Charles Isbell

To address this issue, we develop a framework through which a deep RL agent learns to generalize policies from smaller, simpler domains to more complex ones using a recurrent attention mechanism.

Reinforcement Learning
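The transfer idea can be caricatured as follows: a policy trained on a small $k \times k$ domain is reused inside a larger one by attending to the $k \times k$ window around the agent, so the small-domain policy always sees a familiar input size. Everything in the sketch (the grids, the window, the greedy policy) is an invented toy, not the paper's recurrent attention mechanism.

```python
import numpy as np

K = 3  # side length of the small domain the policy was "trained" on

def small_policy(patch):
    """Hypothetical small-domain policy: step toward the patch's max cell."""
    r, c = np.unravel_index(patch.argmax(), patch.shape)
    center = K // 2
    return (np.sign(r - center), np.sign(c - center))

def act_in_large_grid(grid, pos):
    """Attend to the K x K window around pos, then apply the small policy."""
    r, c = pos
    patch = grid[r - K // 2 : r + K // 2 + 1, c - K // 2 : c + K // 2 + 1]
    return small_policy(patch)

large = np.zeros((7, 7))
large[5, 5] = 1.0  # goal placed in the larger domain
step = act_in_large_grid(large, (4, 4))  # window covers rows/cols 3..5
```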
