Search Results for author: Ido Greenberg

Found 7 papers, 4 with code

Dynamic Planning in Open-Ended Dialogue using Reinforcement Learning

No code implementations · 25 Jul 2022 · Deborah Cohen, MoonKyung Ryu, Yinlam Chow, Orgad Keller, Ido Greenberg, Avinatan Hassidim, Michael Fink, Yossi Matias, Idan Szpektor, Craig Boutilier, Gal Elidan

Despite recent advances in natural language understanding and generation, and decades of research on the development of conversational bots, building automated agents that can carry on rich open-ended conversations with humans "in the wild" remains a formidable challenge.

Tasks: Natural Language Understanding, Reinforcement Learning (+1 more)

Efficient Risk-Averse Reinforcement Learning

2 code implementations · 10 May 2022 · Ido Greenberg, Yinlam Chow, Mohammad Ghavamzadeh, Shie Mannor

In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.

Tasks: Autonomous Driving, Reinforcement Learning (+1 more)
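One widely used risk measure is the conditional value at risk (CVaR). As a toy illustration of the setting (not this paper's algorithm), the empirical CVaR of a batch of sampled returns is just the mean of the worst α-fraction:

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Empirical conditional value at risk: the mean of the worst
    alpha-fraction of sampled returns. Illustrative sketch only."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the lower tail
    return returns[:k].mean()

returns = [10.0, 8.0, 9.5, -20.0, 11.0, 7.0, 10.5, 9.0, -15.0, 10.0]
print(cvar(returns, alpha=0.2))  # mean of the 2 worst returns: -17.5
```

Optimizing this tail mean, rather than the ordinary mean, is what makes a policy risk-averse: rare catastrophic returns (like the -20 above) dominate the objective.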

Continuous Forecasting via Neural Eigen Decomposition

1 code implementation · 31 Jan 2022 · Stav Belogolovsky, Ido Greenberg, Danny Eitan, Shie Mannor

Neural differential equations predict the derivative of a stochastic process.
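As a minimal illustration of the derivative-prediction idea (with a hand-written function standing in for a trained network, and simple forward-Euler integration rather than this paper's method), a forecast is obtained by integrating the predicted derivative:

```python
import numpy as np

def euler_forecast(f, x0, t0, t1, n_steps=1000):
    """Integrate dx/dt = f(x, t) with forward-Euler steps: the basic way a
    neural-ODE-style model turns predicted derivatives into a forecast.
    Here f is a hand-written stand-in for a learned derivative model."""
    x = np.asarray(x0, dtype=float)
    t, dt = t0, (t1 - t0) / n_steps
    for _ in range(n_steps):
        x = x + dt * f(x, t)  # step along the predicted derivative
        t += dt
    return x

# Toy dynamics f(x, t) = -x, whose exact solution is x(t) = x0 * exp(-t).
x1 = euler_forecast(lambda x, t: -x, x0=[1.0], t0=0.0, t1=1.0)
```

With 1000 steps the Euler forecast lands close to the exact value exp(-1) ≈ 0.368; continuous-time models of this kind can be queried at arbitrary times rather than on a fixed grid.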

The Fragility of Noise Estimation in Kalman Filter: Optimization Can Handle Model-Misspecification

1 code implementation · 6 Apr 2021 · Ido Greenberg, Shie Mannor, Netanel Yannay

The Kalman Filter (KF) parameters are traditionally determined by noise estimation, since under the KF assumptions, the state prediction errors are minimized when the parameters correspond to the noise covariance.

Tasks: Noise Estimation
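A minimal 1-D Kalman filter (an illustrative sketch, not this paper's code) shows exactly where those noise parameters enter: q and r below are the process- and observation-noise covariances that noise estimation tries to recover, and misspecifying them degrades the state predictions.

```python
import numpy as np

def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Minimal Kalman filter for a 1-D random-walk state.
    q: process-noise variance, r: observation-noise variance --
    the parameters traditionally set by noise estimation."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                # predict: random-walk state diffuses by q
        k = p / (p + r)          # Kalman gain balances model vs. observation
        x = x + k * (z - x)      # update toward observation z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))  # latent random walk
zs = truth + rng.normal(0.0, 1.0, 200)        # noisy observations
est = kalman_1d(zs, q=0.01, r=1.0)            # q, r match the true noise here
```

When q and r match the true noise covariances, the filtered estimates have lower error than the raw observations; the paper's point is that when the model itself is misspecified, directly optimizing these parameters can beat estimating them from the noise.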

Detecting Rewards Deterioration in Episodic Reinforcement Learning

1 code implementation · 22 Oct 2020 · Ido Greenberg, Shie Mannor

In many RL applications, once training ends, it is vital to detect any deterioration in the agent performance as soon as possible.

Tasks: Reinforcement Learning (RL) (+2 more)

Drift Detection in Episodic Data: Detect When Your Agent Starts Faltering

No code implementations · 28 Sep 2020 · Ido Greenberg, Shie Mannor

The statistical power of the new testing procedure is shown to outperform alternative tests - often by orders of magnitude - for a variety of environment modifications (which cause deterioration in agent performance).
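As a baseline illustration of the problem setting (a plain one-sided z-test on mean episode reward, i.e. one of the simple alternatives such a procedure is compared against, not the paper's test), deterioration detection can be sketched as:

```python
import numpy as np

def deterioration_alarm(reference, recent, z_thresh=3.0):
    """Raise an alarm when recent episode rewards fall significantly below
    a reference batch. Simple one-sided z-test on the mean -- a weak
    stand-in for a dedicated drift-detection test."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(recent, dtype=float)
    se = ref.std(ddof=1) / np.sqrt(len(rec))  # std. error of a recent-batch mean
    z = (ref.mean() - rec.mean()) / se        # positive z = rewards dropped
    return bool(z > z_thresh)

rng = np.random.default_rng(1)
reference = rng.normal(10.0, 1.0, 500)  # episode rewards of a healthy agent
degraded = rng.normal(8.0, 1.0, 30)     # mean reward dropped by 2
print(deterioration_alarm(reference, degraded))
```

The degraded batch triggers the alarm here because its mean-reward drop is many standard errors wide; the appeal of a more powerful test is detecting far smaller or slower drifts from far fewer episodes.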
