Search Results for author: Chen Tessler

Found 16 papers, 5 papers with code

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters

no code implementations · 2 May 2023 · Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue Bin Peng

In this work, we present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.

Imitation Learning

Towards Autonomous Grading In The Real World

no code implementations · 13 Jun 2022 · Yakov Miron, Chana Ross, Yuval Goldfracht, Chen Tessler, Dotan Di Castro

Since the heuristics successfully solve the task in the simulated environment, we show they can be leveraged to guide a learning agent that generalizes and solves the task both in simulation and in a scaled prototype environment.

Ensemble Bootstrapping for Q-Learning

no code implementations · 28 Feb 2021 · Oren Peer, Chen Tessler, Nadav Merlis, Ron Meir

Finally, we demonstrate the superior performance of a deep RL variant of EBQL over other deep QL algorithms on a suite of Atari games.

Atari Games · Q-Learning
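The ensemble-bootstrapping idea above can be illustrated with a minimal tabular sketch. This is an assumption-laden toy, not the paper's implementation: it keeps `K` Q-tables and, when updating one member, lets that member pick the greedy action while the average of the *other* members evaluates it (a double-Q-style decoupling meant to reduce over-estimation). All names and sizes are illustrative.

```python
import numpy as np

n_states, n_actions, K = 5, 3, 4          # illustrative sizes
Q = np.zeros((K, n_states, n_actions))    # ensemble of K Q-tables
alpha, gamma = 0.1, 0.9

def ensemble_update(s, a, r, s_next, k):
    """Update ensemble member k, bootstrapping from the other members' mean
    (illustrative sketch, not the paper's exact EBQL update)."""
    others = np.delete(np.arange(K), k)
    mean_q = Q[others, s_next, :].mean(axis=0)   # other members' estimate
    a_star = int(np.argmax(Q[k, s_next, :]))     # member k selects the action
    target = r + gamma * mean_q[a_star]          # the others evaluate it
    Q[k, s, a] += alpha * (target - Q[k, s, a])

# one illustrative transition: state 0, action 1, reward 1.0, next state 2
ensemble_update(s=0, a=1, r=1.0, s_next=2, k=0)
```

Only member 0 moves here; the decoupling between the member that selects `a_star` and the members that evaluate it is the point of the sketch.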

Reward Tweaking: Maximizing the Total Reward While Planning for Short Horizons

no code implementations · 9 Feb 2020 · Chen Tessler, Shie Mannor

In reinforcement learning, the discount factor $\gamma$ controls the agent's effective planning horizon.

Continuous Control · reinforcement-learning +1
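The abstract's claim that $\gamma$ controls the effective planning horizon can be made concrete with a small sketch (illustrative only; the numbers are not from the paper): a discount of $\gamma$ gives an effective horizon of roughly $1/(1-\gamma)$ steps, beyond which rewards contribute little to the discounted return.

```python
# Illustrative: how the discount factor gamma bounds the planning horizon.
def discounted_return(rewards, gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

gamma = 0.9
horizon = 1 / (1 - gamma)                       # effective horizon ~ 10 steps
full = discounted_return([1.0] * 1000, gamma)   # ~ 1 / (1 - gamma) = 10
truncated = discounted_return([1.0] * 10, gamma)
print(horizon, full, truncated)
```

With $\gamma = 0.9$, the first 10 steps already carry roughly two thirds of the total discounted value, which is why a small $\gamma$ effectively shortens the horizon the agent plans over.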

Never Worse, Mostly Better: Stable Policy Improvement in Deep Reinforcement Learning

no code implementations · 2 Oct 2019 · Pranav Khanna, Guy Tennenholtz, Nadav Merlis, Shie Mannor, Chen Tessler

In recent years, there has been significant progress in applying deep reinforcement learning (RL) for solving challenging problems across a wide variety of domains.

Continuous Control · reinforcement-learning +1

Contextual Inverse Reinforcement Learning

no code implementations · 25 Sep 2019 · Philip Korsunsky, Stav Belogolovsky, Tom Zahavy, Chen Tessler, Shie Mannor

In this setting, the reward, which is unknown to the agent, is a function of a static parameter referred to as the context.

reinforcement-learning · Reinforcement Learning (RL)
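The contextual setting described above can be sketched in a few lines. The linear form and all names here are assumptions for illustration, not taken from the paper: a static context vector, hidden from the agent, determines the reward through a fixed mapping over state-action features.

```python
import numpy as np

def contextual_reward(context, features):
    """Reward as a function of a static context (illustrative linear form:
    r(s, a; c) = c . phi(s, a); the actual mapping in the paper may differ)."""
    return float(context @ features)

context = np.array([0.5, -1.0, 2.0])   # static parameter, hidden from the agent
phi = np.array([1.0, 0.0, 0.5])        # feature vector phi(s, a)
r = contextual_reward(context, phi)    # 0.5*1.0 + (-1.0)*0.0 + 2.0*0.5 = 1.5
```

The inverse problem the paper studies is then to recover the unknown `context` from expert demonstrations.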

Stabilizing Off-Policy Reinforcement Learning with Conservative Policy Gradients

no code implementations · 25 Sep 2019 · Chen Tessler, Nadav Merlis, Shie Mannor

In recent years, advances in deep learning have enabled the application of reinforcement learning algorithms in complex domains.

reinforcement-learning · Reinforcement Learning (RL)

Inverse Reinforcement Learning in Contextual MDPs

2 code implementations · 23 May 2019 · Stav Belogolovsky, Philip Korsunsky, Shie Mannor, Chen Tessler, Tom Zahavy

Most importantly, we show both theoretically and empirically that our algorithms perform zero-shot transfer (generalize to new and unseen contexts).

Autonomous Driving · reinforcement-learning +1

Distributional Policy Optimization: An Alternative Approach for Continuous Control

3 code implementations · NeurIPS 2019 · Chen Tessler, Guy Tennenholtz, Shie Mannor

We show that optimizing over such sets results in local movement in the action space and thus convergence to sub-optimal solutions.

Continuous Control · Policy Gradient Methods

Action Assembly: Sparse Imitation Learning for Text Based Games with Combinatorial Action Spaces

no code implementations · 23 May 2019 · Chen Tessler, Tom Zahavy, Deborah Cohen, Daniel J. Mankowitz, Shie Mannor

We propose a computationally efficient algorithm that combines compressed sensing with imitation learning to solve text-based games with combinatorial action spaces.

Imitation Learning · text-based games +1

A Deep Hierarchical Approach to Lifelong Learning in Minecraft

no code implementations · 25 Apr 2016 · Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, Shie Mannor

Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network.
