Search Results for author: Christian A. Schroeder de Witt

Found 4 papers, 3 papers with code

A Self-Supervised Auxiliary Loss for Deep RL in Partially Observable Settings

no code implementations • 17 Apr 2021 • Eltayeb Ahmed, Luisa Zintgraf, Christian A. Schroeder de Witt, Nicolas Usunier

In this work we explore an auxiliary loss for reinforcement learning in environments where strong performance requires agents to navigate a spatial environment.
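The listing does not specify the form of the auxiliary loss, but one plausible instance for a navigating agent is a self-supervised head that predicts the agent's own (x, y) position from a learned latent state, with the prediction error added to the RL loss. The linear head, position target, and weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def position_aux_loss(latent, true_pos, W):
    """MSE of a linear position predictor (hypothetical auxiliary head)."""
    pos_hat = W @ latent
    return float(np.mean((pos_hat - true_pos) ** 2))

def total_loss(rl_loss, latent, true_pos, W, aux_weight=0.1):
    """Combine the usual RL objective with the weighted auxiliary term."""
    return rl_loss + aux_weight * position_aux_loss(latent, true_pos, W)

rng = np.random.default_rng(0)
latent = rng.normal(size=8)   # e.g. an RNN hidden state under partial observability
W = rng.normal(size=(2, 8))   # hypothetical linear head predicting (x, y)
loss = total_loss(1.5, latent, np.array([3.0, 4.0]), W)
```

In practice the gradient of the auxiliary term would be backpropagated into the same encoder that produces the latent state, shaping the representation without changing the reward.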

Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning

2 code implementations • 7 Jun 2020 • Shariq Iqbal, Christian A. Schroeder de Witt, Bei Peng, Wendelin Böhmer, Shimon Whiteson, Fei Sha

Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities; nevertheless, common patterns of behavior frequently emerge among these agents/entities.

FACMAC: Factored Multi-Agent Centralised Policy Gradients

3 code implementations NeurIPS 2021 Bei Peng, Tabish Rashid, Christian A. Schroeder de Witt, Pierre-Alexandre Kamienny, Philip H. S. Torr, Wendelin Böhmer, Shimon Whiteson

We propose FACtored Multi-Agent Centralised policy gradients (FACMAC), a new method for cooperative multi-agent reinforcement learning in both discrete and continuous action spaces.
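The abstract describes a factored centralised critic with per-agent policy gradients. As a hedged sketch of that structure (not the paper's implementation): each agent has its own actor, per-agent utilities are combined into a joint value, and the policy gradient flows through all agents' actions jointly. The additive mixing and linear/quadratic forms below are illustrative assumptions; FACMAC itself learns a non-linear mixing of the per-agent utilities.

```python
import numpy as np

def actor(obs, theta):
    # Deterministic continuous-action policy (linear, for illustration).
    return theta @ obs

def utility(obs, act, w):
    # Hypothetical per-agent utility, maximised when act == w @ obs.
    return -float(np.sum((act - w @ obs) ** 2))

def q_tot(obs_list, act_list, ws):
    # Additive mixing stands in here for FACMAC's learned mixing network.
    return sum(utility(o, a, w) for o, a, w in zip(obs_list, act_list, ws))

rng = np.random.default_rng(1)
obs = [rng.normal(size=4) for _ in range(3)]      # observations for 3 agents
ws = [rng.normal(size=(2, 4)) for _ in range(3)]  # per-agent critic parameters
acts = [w @ o for w, o in zip(ws, obs)]           # jointly optimal actions
best = q_tot(obs, acts, ws)
```

In centralised training one would ascend the gradient of `q_tot` with respect to every actor's parameters at once, which is what distinguishes a centralised policy gradient from independent per-agent updates.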
