Search Results for author: Luca Sabbioni

Found 5 papers, 1 paper with code

Stepsize Learning for Policy Gradient Methods in Contextual Markov Decision Processes

no code implementations · 13 Jun 2023 · Luca Sabbioni, Francesco Corda, Marcello Restelli

Policy-based algorithms are among the most widely adopted techniques in model-free RL, thanks to their strong theoretical grounding and good behavior in continuous action spaces.

Meta Reinforcement Learning · Policy Gradient Methods
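The step-size sensitivity this line of work addresses can be seen in a plain REINFORCE update, where the learning rate `alpha` is the hyperparameter the paper proposes to learn rather than hand-tune. A minimal sketch on a two-armed bandit (the setup and all names are illustrative, not the authors' method):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_bandit(alpha, steps=2000, seed=0):
    """Vanilla REINFORCE on a 2-armed bandit; alpha is the step size
    that a stepsize-learning method would adapt automatically."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)           # policy parameters (softmax logits)
    means = np.array([0.0, 1.0])  # arm 1 has the higher expected reward
    for _ in range(steps):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        r = means[a] + 0.1 * rng.normal()
        grad = -p
        grad[a] += 1.0             # grad of log pi(a) for a softmax policy
        theta += alpha * r * grad  # the update whose scale alpha controls
    return softmax(theta)
```

With a reasonable `alpha` the policy concentrates on the better arm, while a value that is too large or too small destabilizes or stalls learning; that sensitivity is what motivates learning the step size.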

Simultaneously Updating All Persistence Values in Reinforcement Learning

no code implementations · 21 Nov 2022 · Luca Sabbioni, Luca Al Daire, Lorenzo Bisi, Alberto Maria Metelli, Marcello Restelli

In reinforcement learning, the performance of learning agents is highly sensitive to the choice of time discretization.

Atari Games · Q-Learning · +2
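Action persistence, the mechanism behind the time-discretization sensitivity studied here, means repeating each selected action for k consecutive base steps, effectively lowering the control frequency. A minimal sketch of one k-persistent transition (the toy chain environment and helper names are illustrative, not from the paper):

```python
def persist_step(env_step, state, action, k, gamma=0.99):
    """Execute one k-persistent step: repeat `action` for up to k base
    transitions, accumulating the discounted reward along the way."""
    total, disc, done = 0.0, 1.0, False
    for _ in range(k):
        state, reward, done = env_step(state, action)
        total += disc * reward
        disc *= gamma
        if done:
            break
    return state, total, done

def chain_step(s, a):
    """Toy deterministic chain: move right (a=1) or left (otherwise),
    reward 1 per base step, episode ends on reaching state 3."""
    s2 = s + (1 if a == 1 else -1)
    return s2, 1.0, s2 >= 3
```

With k=1 this reduces to the base MDP; larger k trades reactivity for fewer decisions, which is the trade-off that makes the choice of persistence (or of which persistence values to update) matter.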

Meta Learning the Step Size in Policy Gradient Methods

no code implementations · ICML Workshop AutoML 2021 · Luca Sabbioni, Francesco Corda, Marcello Restelli

Policy-based algorithms are among the most widely adopted techniques in model-free RL, thanks to their strong theoretical grounding and good behavior in continuous action spaces.

Meta-Learning · Meta Reinforcement Learning · +1

Control Frequency Adaptation via Action Persistence in Batch Reinforcement Learning

1 code implementation · ICML 2020 · Alberto Maria Metelli, Flavio Mazzolini, Lorenzo Bisi, Luca Sabbioni, Marcello Restelli

The choice of the control frequency of a system has a relevant impact on the ability of reinforcement learning algorithms to learn a high-performing policy.

Reinforcement Learning (RL)

Risk-Averse Trust Region Optimization for Reward-Volatility Reduction

no code implementations · 6 Dec 2019 · Lorenzo Bisi, Luca Sabbioni, Edoardo Vittori, Matteo Papini, Marcello Restelli

In real-world decision-making problems, for instance in the fields of finance, robotics or autonomous driving, keeping uncertainty under control is as important as maximizing expected returns.

Autonomous Driving · Decision Making
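Keeping uncertainty under control while maximizing returns amounts to trading expected reward against reward volatility, i.e. the variability of the per-step reward. A hedged sketch of such a mean-volatility score computed from sampled rewards (the function name and exact estimator are illustrative, not necessarily the paper's definition):

```python
def mean_volatility(rewards, beta):
    """Mean-volatility trade-off: average per-step reward minus
    beta times the empirical variance of the per-step reward."""
    n = len(rewards)
    mean = sum(rewards) / n
    volatility = sum((r - mean) ** 2 for r in rewards) / n  # variance
    return mean - beta * volatility
```

Two reward streams with the same mean but different step-to-step variability then receive different scores, which is how a risk-averse objective penalizes volatile behavior in domains like finance or autonomous driving.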
