Search Results for author: Matthia Sabatelli

Found 14 papers, 6 papers with code

VDSC: Enhancing Exploration Timing with Value Discrepancy and State Counts

no code implementations26 Mar 2024 Marius Captari, Remo Sasso, Matthia Sabatelli

While more sophisticated exploration strategies can excel in specific, often sparse-reward environments, simpler approaches such as $\epsilon$-greedy continue to outperform them across a broader spectrum of domains.

Efficient Exploration
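The abstract above contrasts sophisticated exploration schemes with $\epsilon$-greedy. For reference, a minimal sketch of $\epsilon$-greedy action selection (function and variable names are illustrative, not taken from the paper; ties are broken by taking the first maximizer):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon explore (uniform random action),
    otherwise exploit the current value estimates greedily."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    # greedy choice; first maximizer wins on ties
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the choice is always greedy:
assert epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0) == 1
```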

Bridging the Reality Gap of Reinforcement Learning based Traffic Signal Control using Domain Randomization and Meta Learning

no code implementations21 Jul 2023 Arthur Müller, Matthia Sabatelli

Subsequently, we evaluated the performance of the two methods on a separate model of the same intersection that was developed with a different traffic simulator.

Meta-Learning, Reinforcement Learning (RL)

Factors of Influence of the Overestimation Bias of Q-Learning

1 code implementation11 Oct 2022 Julius Wagenbach, Matthia Sabatelli

We study whether the learning rate $\alpha$, the discount factor $\gamma$ and the reward signal $r$ have an influence on the overestimation bias of the Q-Learning algorithm.

Q-Learning
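The three quantities the paper studies, $\alpha$, $\gamma$ and $r$, all enter the standard tabular Q-Learning update, whose max over next-state actions is the usual source of overestimation bias. A minimal sketch (tabular setting assumed; this is the textbook update, not code from the paper):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """One tabular Q-Learning step. The max over next actions is
    what induces the overestimation bias studied in the paper."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1, alpha=0.5, gamma=0.9)
# Q[0, 1] moved halfway toward the target 1.0: now 0.5
```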

Machine Learning Students Overfit to Overfitting

no code implementations7 Sep 2022 Matias Valdenegro-Toro, Matthia Sabatelli

Overfitting and generalization are important concepts in Machine Learning, as only models that generalize are useful for general applications.

Misconceptions

How Well Do Vision Transformers (VTs) Transfer To The Non-Natural Image Domain? An Empirical Study Involving Art Classification

1 code implementation9 Aug 2022 Vincent Tonkes, Matthia Sabatelli

Vision Transformers (VTs) are becoming a valuable alternative to Convolutional Neural Networks (CNNs) when it comes to problems involving high-dimensional and spatially organized inputs such as images.

Transfer Learning

Multi-Source Transfer Learning for Deep Model-Based Reinforcement Learning

no code implementations28 May 2022 Remo Sasso, Matthia Sabatelli, Marco A. Wiering

A crucial challenge in reinforcement learning is to reduce the number of interactions with the environment that an agent requires to master a given task.

Continuous Control, Model-based Reinforcement Learning +3

On The Transferability of Deep-Q Networks

no code implementations6 Oct 2021 Matthia Sabatelli, Pierre Geurts

Transfer Learning (TL) is an efficient machine learning paradigm that helps overcome some of the hurdles that characterize the successful training of deep neural networks, ranging from long training times to the need for large datasets.

Transfer Learning

Fractional Transfer Learning for Deep Model-Based Reinforcement Learning

no code implementations14 Aug 2021 Remo Sasso, Matthia Sabatelli, Marco A. Wiering

Reinforcement learning (RL) is well known for requiring large amounts of data before agents learn to perform complex tasks.

Model-based Reinforcement Learning, reinforcement-learning +2

On the Transferability of Winning Tickets in Non-Natural Image Datasets

no code implementations11 May 2020 Matthia Sabatelli, Mike Kestemont, Pierre Geurts

We study the generalization properties of pruned neural networks that are winning tickets, in the sense of the lottery ticket hypothesis, found on datasets of natural images.

Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning algorithms

3 code implementations1 Sep 2019 Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering

This paper makes one step forward towards characterizing a new family of model-free Deep Reinforcement Learning (DRL) algorithms.

Q-Learning

Deep Quality-Value (DQV) Learning

3 code implementations30 Sep 2018 Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering

We introduce a novel Deep Reinforcement Learning (DRL) algorithm called Deep Quality-Value (DQV) Learning.

Atari Games, Q-Learning +2
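DQV's core idea, per the abstract, is to learn two value functions, a state-value function V alongside the state-action value function Q. A tabular sketch of that idea (the update rules are paraphrased from the DQV paper and simplified here; treat the exact form, and all names, as an assumption):

```python
import numpy as np

def dqv_update(Q, V, s, a, r, s_next, alpha, gamma):
    """Tabular sketch of Deep Quality-Value (DQV) learning:
    both estimates bootstrap from V(s'), so the Q update avoids
    the max operator used by standard Q-Learning."""
    target = r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q, V

Q, V = np.zeros((2, 2)), np.zeros(2)
Q, V = dqv_update(Q, V, s=0, a=0, r=1.0, s_next=1, alpha=0.5, gamma=0.9)
# both V[0] and Q[0, 0] moved halfway toward the target 1.0: now 0.5
```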
