Search Results for author: Richard Valenzano

Found 5 papers, 2 papers with code

Opti Code Pro: A Heuristic Search-based Approach to Code Refactoring

no code implementations · 12 May 2023 · Sourena Khanzadeh, Samad Alias Nyein Chan, Richard Valenzano, Manar Alalfi

This paper presents an approach that applies and evaluates best-first search methods for code refactoring.
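No code is listed for this entry, so the following is a purely illustrative sketch of the general technique the abstract names (greedy best-first search), not the authors' Opti Code Pro implementation. The `successors`, `heuristic`, and `is_goal` callables and the toy usage at the end are hypothetical placeholders:

```python
import heapq
import itertools

def best_first_search(start, successors, heuristic, is_goal):
    """Greedy best-first search: always expand the open state with the
    lowest heuristic value. States must be hashable; `successors`,
    `heuristic`, and `is_goal` are caller-supplied callables."""
    counter = itertools.count()  # tie-breaker so states themselves are never compared
    frontier = [(heuristic(start), next(counter), start)]
    parent = {start: None}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            path = []  # reconstruct the start -> goal path
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt), next(counter), nxt))
    return None  # no goal state reachable

# Toy usage: reach 9 from 0 via +1/+2 steps, guided by distance to 9.
print(best_first_search(0, lambda s: [s + 1, s + 2],
                        lambda s: abs(9 - s), lambda s: s == 9))
```

In a refactoring setting the states would instead be program versions, successors the candidate refactorings, and the heuristic a code-quality estimate; those specifics are assumptions here, not details taken from the paper.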

Learning Reward Machines: A Study in Partially Observable Reinforcement Learning

no code implementations · 17 Dec 2021 · Rodrigo Toro Icarte, Ethan Waldie, Toryn Q. Klassen, Richard Valenzano, Margarita P. Castro, Sheila A. McIlraith

Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems.

Partially Observable Reinforcement Learning · Problem Decomposition +2

Reward Machines: Exploiting Reward Function Structure in Reinforcement Learning

3 code implementations · 6 Oct 2020 · Rodrigo Toro Icarte, Toryn Q. Klassen, Richard Valenzano, Sheila A. McIlraith

First, we propose reward machines, a type of finite state machine that supports the specification of reward functions while exposing reward function structure.

Counterfactual Reasoning +3
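As a rough, hypothetical illustration of the object the abstract above describes (not the authors' released code, which the "3 code implementations" link points to), a reward machine can be written as a small finite state machine whose transitions fire on high-level propositions and emit rewards. The two-step "coffee delivery" task, the proposition names, and the reward values below are invented for illustration:

```python
# Hypothetical reward machine: observe "coffee", then "office";
# reward 1 is emitted when the delivery completes.
RM_INITIAL = 0
RM_TERMINAL = 2
# (current RM state, proposition) -> (next RM state, reward)
RM_DELTA = {
    (0, "coffee"): (1, 0.0),
    (1, "office"): (2, 1.0),
}

def rm_step(u, props):
    """Advance the machine on the set of propositions true this step.
    Propositions with no matching transition leave the state unchanged."""
    for p in props:
        if (u, p) in RM_DELTA:
            return RM_DELTA[(u, p)]
    return u, 0.0

# Track the machine along one trajectory of labelled observations.
u, total = RM_INITIAL, 0.0
for props in [set(), {"coffee"}, set(), {"office"}]:
    u, r = rm_step(u, props)
    total += r
print(u == RM_TERMINAL, total)  # True 1.0
```

Exposing the machine's structure in this way is what allows a learner to treat each RM state as a subtask, or to learn over (environment observation, RM state) pairs; loosely, that cross-product view is the kind of decomposition the abstracts in this listing refer to.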

Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning

1 code implementation · ICML 2018 · Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, Sheila McIlraith

In this paper we propose Reward Machines, a type of finite state machine that supports the specification of reward functions while exposing reward function structure to the learner and supporting decomposition.

Hierarchical Reinforcement Learning · Q-Learning +2
