Search Results for author: Johannes Ackermann

Found 5 papers, 4 papers with code

Offline Reinforcement Learning from Datasets with Structured Non-Stationarity

1 code implementation • 23 May 2024 • Johannes Ackermann, Takayuki Osa, Masashi Sugiyama

Current Reinforcement Learning (RL) is often limited by the large amount of data needed to learn a successful policy.

Continuous Control · Offline RL · +2

High-Resolution Image Editing via Multi-Stage Blended Diffusion

1 code implementation • 24 Oct 2022 • Johannes Ackermann, Minjun Li

We first use Blended Diffusion to edit the image at a low resolution, and then upscale it in multiple stages, using a super-resolution model and Blended Diffusion.
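The coarse-to-fine procedure described above can be sketched as a loop: edit once at low resolution, then repeatedly upscale and re-run the masked edit to refine detail. The sketch below is a minimal illustration under stated assumptions; `blended_diffusion_edit` and `super_resolve` are hypothetical stand-ins for the actual diffusion and super-resolution models, not the authors' implementation.

```python
import numpy as np

def blended_diffusion_edit(image, mask, prompt):
    # Hypothetical stand-in for a Blended Diffusion edit step:
    # only the masked region is (notionally) regenerated, the rest is kept.
    edited = image.copy()
    edited[mask] = 0.5  # placeholder for generated content
    return edited

def super_resolve(image, factor=2):
    # Placeholder super-resolution: nearest-neighbour upscaling.
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def multi_stage_edit(image, mask, prompt, stages=2):
    # Stage 0: edit at a low resolution (here, 4x downsampled).
    result = blended_diffusion_edit(image[::4, ::4], mask[::4, ::4], prompt)
    cur_mask = mask[::4, ::4]
    # Later stages: upscale, then re-run the edit to refine the masked region.
    for _ in range(stages):
        result = super_resolve(result, factor=2)
        cur_mask = np.repeat(np.repeat(cur_mask, 2, axis=0), 2, axis=1)
        result = blended_diffusion_edit(result, cur_mask, prompt)
    return result
```

With a 4x downsample and two 2x upscaling stages, the output returns to the input resolution while the edit is refined at each scale.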

Image Inpainting · Super-Resolution · +1

Unsupervised Task Clustering for Multi-Task Reinforcement Learning

1 code implementation • 1 Jan 2021 • Johannes Ackermann, Oliver Paul Richter, Roger Wattenhofer

We show the generality of our approach by evaluating on simple discrete and continuous control tasks, as well as complex bipedal walker tasks and Atari games.

Atari Games · Clustering · +5

Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics

3 code implementations • 3 Oct 2019 • Johannes Ackermann, Volker Gabler, Takayuki Osa, Masashi Sugiyama

Finally, we investigate the application of multi-agent methods to high-dimensional robotic tasks and show that our approach can be used to learn decentralized policies in this domain.
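The core idea named in the title, reducing overestimation with double centralized critics, can be illustrated with a clipped double-Q target: the bootstrapped value is computed from the minimum of two critics over the joint observations and actions of all agents. This is a minimal sketch, not the paper's implementation; the linear critics `q1`/`q2` and the 8-dimensional joint input are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical centralized critics, modelled here as linear functions
# of the concatenated joint observation-action vector for illustration.
w1 = rng.normal(size=8)
w2 = rng.normal(size=8)

def q1(joint_obs_act):
    return float(w1 @ joint_obs_act)

def q2(joint_obs_act):
    return float(w2 @ joint_obs_act)

def td_target(reward, next_joint_obs_act, gamma=0.99):
    # Clipped double Q-learning: bootstrapping from the minimum of the
    # two centralized critics counteracts overestimation bias.
    return reward + gamma * min(q1(next_joint_obs_act), q2(next_joint_obs_act))
```

By construction the target never exceeds what either critic alone would produce, which is the mechanism for damping overestimation.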

Multi-agent Reinforcement Learning · Reinforcement Learning (RL)
