Search Results for author: Edward Groshev

Found 3 papers, 0 papers with code

Sub-Goal Trees -- a Framework for Goal-Based Reinforcement Learning

No code implementations · ICML 2020 · Tom Jurgenson, Or Avner, Edward Groshev, Aviv Tamar

Reinforcement learning (RL), building on Bellman's optimality equation, naturally optimizes for a single goal, yet can be made multi-goal by augmenting the state with the goal.
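The goal-augmentation idea in this abstract can be illustrated with a minimal sketch (not the paper's method): tabular Q-learning on a 1-D chain where the goal is appended to the state, so one Q-function serves every goal. All names and hyperparameters here are illustrative assumptions.

```python
import random

# Sketch: make single-goal Q-learning multi-goal by augmenting the state
# with the goal. Q is indexed by (position, goal) pairs, not position alone.
N = 5
ACTIONS = (-1, +1)  # step left / step right on the chain
Q = {((s, g), a): 0.0 for s in range(N) for g in range(N) for a in ACTIONS}

def step(pos, a):
    return min(max(pos + a, 0), N - 1)

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(2000):
    goal = random.randrange(N)          # draw a fresh goal each episode
    pos = random.randrange(N)
    for _ in range(20):
        if pos == goal:
            break
        state = (pos, goal)             # the goal-augmented state
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(state, act)])
        nxt = step(pos, a)
        reward = 1.0 if nxt == goal else 0.0
        target = reward if nxt == goal else (
            reward + gamma * max(Q[((nxt, goal), b)] for b in ACTIONS))
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        pos = nxt

def greedy(pos, goal):
    """Greedy action for the augmented state (pos, goal)."""
    return max(ACTIONS, key=lambda act: Q[((pos, goal), act)])
```

After training, the same Q-table steers toward whichever goal is embedded in the state: `greedy(0, 4)` moves right, `greedy(4, 0)` moves left.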

Tasks: Motion Planning, Reinforcement Learning, +1

Sub-Goal Trees -- a Framework for Goal-Directed Trajectory Prediction and Optimization

No code implementations · 12 Jun 2019 · Tom Jurgenson, Edward Groshev, Aviv Tamar

In such problems, the way we choose to represent a trajectory underlies algorithms for trajectory prediction and optimization.

Tasks: Motion Planning, Reinforcement Learning, +2

Learning Generalized Reactive Policies using Deep Neural Networks

No code implementations · 24 Aug 2017 · Edward Groshev, Maxwell Goldstein, Aviv Tamar, Siddharth Srivastava, Pieter Abbeel

We show that a deep neural network can be used to learn and represent a "generalized reactive policy" (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances.
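The mapping described here — one network from (problem instance, state) to an action — can be sketched as a single forward pass over a concatenated encoding. This is a hypothetical illustration, not the authors' architecture; the dimensions and two-layer structure are assumptions.

```python
import numpy as np

# Sketch of a generalized reactive policy (GRP): one set of weights maps a
# problem-instance encoding plus a state encoding to an action index, so the
# same network serves many problem instances.
rng = np.random.default_rng(0)
INSTANCE_DIM, STATE_DIM, HIDDEN, N_ACTIONS = 8, 4, 16, 3

# Randomly initialized weights stand in for a trained network.
W1 = rng.standard_normal((INSTANCE_DIM + STATE_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, N_ACTIONS)) * 0.1

def grp_action(instance_vec, state_vec):
    """One forward pass: concatenate instance and state, return argmax action."""
    x = np.concatenate([instance_vec, state_vec])
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    logits = h @ W2
    return int(np.argmax(logits))

a = grp_action(rng.standard_normal(INSTANCE_DIM), rng.standard_normal(STATE_DIM))
```

The key design point the abstract highlights is that the problem instance is an input, not baked into the weights, which is what lets one policy generalize across instances.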

Tasks: Decision Making, Feature Selection
