Search Results for author: Phillip Swazinna

Found 7 papers, 3 papers with code

Learning Control Policies for Variable Objectives from Offline Data

no code implementations • 11 Aug 2023 • Marc Weber, Phillip Swazinna, Daniel Hein, Steffen Udluft, Volkmar Sterzing

Offline reinforcement learning provides a viable approach to obtain advanced control strategies for dynamical systems, in particular when direct interaction with the environment is not available.

reinforcement-learning

Automatic Trade-off Adaptation in Offline RL

no code implementations • 16 Jun 2023 • Phillip Swazinna, Steffen Udluft, Thomas Runkler

Recently, offline RL algorithms have been proposed that remain adaptive at runtime.

Offline RL

User-Interactive Offline Reinforcement Learning

1 code implementation • 21 May 2022 • Phillip Swazinna, Steffen Udluft, Thomas Runkler

At the same time, offline RL algorithms cannot tune their most important hyperparameter: the proximity of the learned policy to the original policy.
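The proximity trade-off described above can be sketched as a regularized objective in which a weight (called `lam` here, an illustrative name) penalizes deviation from the dataset action; this is a minimal sketch of the general idea, not the paper's actual algorithm:

```python
import numpy as np

def policy_objective(q_value, policy_action, data_action, lam):
    """Offline RL objective sketch: maximize estimated value while
    penalizing squared deviation from the action seen in the dataset.
    `lam` plays the role of the proximity hyperparameter."""
    proximity_penalty = np.sum((policy_action - data_action) ** 2)
    return q_value - lam * proximity_penalty

# A larger lam forces the policy to stay near the dataset actions,
# so the same candidate action scores lower under a tight penalty.
a_pi = np.array([0.8, -0.2])
a_data = np.array([0.5, 0.0])
loose = policy_objective(1.0, a_pi, a_data, lam=0.1)
tight = policy_objective(1.0, a_pi, a_data, lam=10.0)
print(loose, tight)
```

Being able to adjust `lam` after training, rather than fixing it beforehand, is what makes an algorithm user-interactive in the sense of the title.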

Offline RL • reinforcement-learning • +1

Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning

1 code implementation • 14 Jan 2022 • Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler

Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists.

Offline RL • reinforcement-learning • +1

Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning

no code implementations • 26 Nov 2021 • Phillip Swazinna, Steffen Udluft, Thomas Runkler

Recently developed offline reinforcement learning algorithms have made it possible to learn policies directly from pre-collected datasets, giving rise to a new dilemma for practitioners: since the performance these algorithms can deliver depends greatly on the dataset presented to them, practitioners must pick the right dataset among those available.
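One simple data-quality signal of the kind this abstract alludes to is the average episode return of a candidate dataset; the sketch below ranks datasets by that score. The function names and the specific scoring rule are assumptions for illustration, not the measure proposed in the paper:

```python
import numpy as np

def mean_episode_return(episodes):
    """Score a dataset by the average undiscounted return of its episodes.
    `episodes` is a list of per-step reward arrays."""
    return float(np.mean([np.sum(rewards) for rewards in episodes]))

def pick_dataset(datasets):
    """Return the name of the candidate dataset with the highest score."""
    return max(datasets, key=lambda name: mean_episode_return(datasets[name]))

# Two hypothetical candidate datasets with per-step rewards per episode.
candidates = {
    "random": [np.array([0.1, 0.0, 0.2]), np.array([0.0, 0.1])],
    "expert": [np.array([1.0, 0.9, 1.1]), np.array([0.8, 1.0])],
}
print(pick_dataset(candidates))  # → expert
```

In practice, mean return alone can be misleading (e.g. it ignores state coverage), which is precisely why dedicated quality measures for dataset selection are worth studying.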

reinforcement-learning • Reinforcement Learning (RL)

Overcoming Model Bias for Robust Offline Deep Reinforcement Learning

no code implementations • 12 Aug 2020 • Phillip Swazinna, Steffen Udluft, Thomas Runkler

State-of-the-art reinforcement learning algorithms mostly rely on being allowed to directly interact with their environment to collect millions of observations.

Continuous Control • Offline RL • +2
