Search Results for author: Tom Jurgenson

Found 6 papers, 3 papers with code

MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning

1 code implementation • 14 Mar 2024 • Zohar Rimon, Tom Jurgenson, Orr Krupnik, Gilad Adler, Aviv Tamar

Meta-reinforcement learning (meta-RL) is a promising framework for tackling challenging domains requiring efficient exploration.

Efficient Exploration • Meta Reinforcement Learning • +1

Fine-Tuning Generative Models as an Inference Method for Robotic Tasks

1 code implementation • 19 Oct 2023 • Orr Krupnik, Elisei Shafer, Tom Jurgenson, Aviv Tamar

Adaptable models could greatly benefit robotic agents operating in the real world, allowing them to deal with novel and varying conditions.

Bayesian Inference • Point Cloud Completion

Goal-Conditioned Supervised Learning with Sub-Goal Prediction

no code implementations • 17 May 2023 • Tom Jurgenson, Aviv Tamar

Based on this idea, we propose Trajectory Iterative Learner (TraIL), an extension of GCSL that further exploits the information in a trajectory, and uses it for learning to predict both actions and sub-goals.
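A minimal PyTorch sketch of the idea described above: on top of GCSL-style hindsight relabeling, the policy also regresses an intermediate state ("sub-goal") between the current state and the relabeled goal. The network sizes, the use of the trajectory midpoint as the sub-goal target, and the equal loss weighting are illustrative assumptions, not the paper's exact setup.

    import torch
    import torch.nn as nn

    class TraILStylePolicy(nn.Module):
        """Goal-conditioned policy with an extra head for sub-goal prediction."""
        def __init__(self, state_dim, action_dim, hidden=128):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.action_head = nn.Linear(hidden, action_dim)   # predicts a_t
            self.subgoal_head = nn.Linear(hidden, state_dim)   # predicts an intermediate state

        def forward(self, state, goal):
            h = self.backbone(torch.cat([state, goal], dim=-1))
            return self.action_head(h), self.subgoal_head(h)

    def trail_style_loss(policy, states, actions, t, k):
        # Hindsight relabeling: treat s_{t+k} as the goal; supervise the taken
        # action a_t and (assumption) the midpoint s_{t+k//2} as the sub-goal.
        s, a, g = states[t], actions[t], states[t + k]
        subgoal_target = states[t + k // 2]
        a_pred, sg_pred = policy(s, g)
        return ((a_pred - a) ** 2).mean() + ((sg_pred - subgoal_target) ** 2).mean()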

Sub-Goal Trees -- a Framework for Goal-Based Reinforcement Learning

no code implementations • ICML 2020 • Tom Jurgenson, Or Avner, Edward Groshev, Aviv Tamar

Reinforcement learning (RL), building on Bellman's optimality equation, naturally optimizes for a single goal, yet can be made multi-goal by augmenting the state with the goal.

Motion Planning • reinforcement-learning • +1
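The abstract above mentions the standard trick of making single-goal RL multi-goal by augmenting the state with the goal. A minimal sketch of that trick follows, assuming a generic gym-style environment whose reset() returns both an observation and a goal; the wrapper name and interface are illustrative, not taken from the paper.

    import numpy as np

    class GoalAugmentedEnv:
        """Expose [obs, goal] as the state so a single-goal RL algorithm becomes goal-conditioned."""
        def __init__(self, env):
            self.env = env

        def reset(self):
            obs, self.goal = self.env.reset()          # assumed to return (obs, goal)
            return np.concatenate([obs, self.goal])

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            return np.concatenate([obs, self.goal]), reward, done, info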

Sub-Goal Trees -- a Framework for Goal-Directed Trajectory Prediction and Optimization

no code implementations • 12 Jun 2019 • Tom Jurgenson, Edward Groshev, Aviv Tamar

In such problems, the way we choose to represent a trajectory underlies algorithms for trajectory prediction and optimization.

Motion Planning • reinforcement-learning • +2
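One way to read the abstract above is that the usual representation predicts a trajectory state-by-state, whereas a sub-goal tree predicts the mid-point between start and goal and recurses on both halves. The sketch below illustrates that recursion with a stand-in mid-point predictor; the learned predictor and all other details are assumptions for illustration, not taken from the abstract.

    def subgoal_tree_trajectory(start, goal, predict_midpoint, depth):
        """Build a trajectory of 2**depth + 1 points by recursive mid-point prediction."""
        if depth == 0:
            return [start, goal]
        mid = predict_midpoint(start, goal)
        left = subgoal_tree_trajectory(start, mid, predict_midpoint, depth - 1)
        right = subgoal_tree_trajectory(mid, goal, predict_midpoint, depth - 1)
        return left + right[1:]    # drop the duplicated shared mid-point

    # Toy usage with a linear-interpolation "predictor":
    print(subgoal_tree_trajectory(0.0, 8.0, lambda a, b: (a + b) / 2, depth=3))
    # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]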

Harnessing Reinforcement Learning for Neural Motion Planning

1 code implementation • 1 Jun 2019 • Tom Jurgenson, Aviv Tamar

We then propose a modification of the popular DDPG RL algorithm that is tailored to motion planning domains, by exploiting the known model in the problem and the set of solved plans in the data.

Motion Planning • reinforcement-learning • +1
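The abstract above mentions exploiting a set of solved plans; one simple way to do that (an illustrative assumption, not necessarily the paper's modification of DDPG) is to convert the plans into transitions and pre-fill the off-policy replay buffer with them before training.

    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)

        def add(self, transition):          # (state, action, reward, next_state, done)
            self.buffer.append(transition)

        def sample(self, batch_size):
            return random.sample(self.buffer, batch_size)

    def prefill_from_plans(buffer, plans, reward_fn):
        """Turn solved plans (lists of (state, action, next_state)) into replay transitions
        so an off-policy learner like DDPG can bootstrap from demonstration data."""
        for plan in plans:
            for i, (s, a, s_next) in enumerate(plan):
                done = i == len(plan) - 1
                buffer.add((s, a, reward_fn(s, a, s_next), s_next, done))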
