Search Results for author: Christian Schroeder de Witt

Found 20 papers, 11 papers with code

Perfectly Secure Steganography Using Minimum Entropy Coupling

1 code implementation · 24 Oct 2022 · Christian Schroeder de Witt, Samuel Sokota, J. Zico Kolter, Jakob Foerster, Martin Strohmeier

In this work, we show that a steganography procedure is perfectly secure under Cachin's information-theoretic model of steganography if and only if it is induced by a coupling.
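
The security result hinges on couplings of the message and covertext distributions. As a hedged illustration (a simple greedy heuristic in the spirit of published greedy minimum-entropy-coupling algorithms, not the paper's implementation), repeatedly pairing the largest remaining probability masses yields a low-entropy coupling of two marginals:

```python
def greedy_mec(p, q):
    """Greedily couple marginals p and q: repeatedly match the largest
    remaining masses, placing min(p_i, q_j) on cell (i, j).
    A heuristic for an (approximately) minimum entropy coupling."""
    p, q = list(p), list(q)
    joint = {}
    while True:
        i = max(range(len(p)), key=lambda k: p[k])
        j = max(range(len(q)), key=lambda k: q[k])
        m = min(p[i], q[j])
        if m <= 1e-12:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        p[i] -= m
        q[j] -= m
    return joint

coupling = greedy_mec([0.5, 0.3, 0.2], [0.6, 0.4])
```

Every unit of mass placed is subtracted from both marginals, so the result is a valid coupling by construction, which is exactly the property the security characterisation refers to.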

Equivariant Networks for Zero-Shot Coordination

1 code implementation · 21 Oct 2022 · Darius Muglich, Christian Schroeder de Witt, Elise van der Pol, Shimon Whiteson, Jakob Foerster

Successful coordination in Dec-POMDPs requires agents to adopt robust strategies and interpretable styles of play for their partner.

Generalized Beliefs for Cooperative AI

no code implementations · 26 Jun 2022 · Darius Muglich, Luisa Zintgraf, Christian Schroeder de Witt, Shimon Whiteson, Jakob Foerster

Self-play is a common paradigm for constructing solutions in Markov games that can yield optimal policies in collaborative settings.

Biological Evolution and Genetic Algorithms: Exploring the Space of Abstract Tile Self-Assembly

no code implementations · 28 May 2022 · Christian Schroeder de Witt

The correctness of our GA implementation is demonstrated using a test-bed fitness function, and our JaTAM implementation is verified by classifying a well-known search space $S_{2, 8}$ based on two tile types.
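
The validation strategy described, checking a GA against a test-bed fitness function before applying it to the tile-assembly search space, can be sketched generically. This is not the paper's JaTAM implementation; it is a minimal GA skeleton using the standard one-max fitness as a hypothetical test bed:

```python
import random

def one_max(genome):
    """Test-bed fitness: number of 1-bits (maximum = genome length)."""
    return sum(genome)

def evolve(fitness, length=8, pop_size=20, generations=60, mut_rate=0.05, seed=0):
    """Elitist GA: keep the top half, refill with mutated crossovers."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < mut_rate) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(one_max)
```

A GA that reliably maximises one-max under these operators gives some confidence in the operators themselves before swapping in an expensive domain fitness such as tile self-assembly classification.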

Model-Free Opponent Shaping

1 code implementation · 3 May 2022 · Chris Lu, Timon Willi, Christian Schroeder de Witt, Jakob Foerster

In general-sum games, the interaction of self-interested learning agents commonly leads to collectively worst-case outcomes, such as defect-defect in the iterated prisoner's dilemma (IPD).
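
The defect-defect outcome mentioned arises because defection strictly dominates in the stage game, so naive self-interested learners are pulled toward it. A minimal sketch with the standard textbook payoff values (assumed here, not taken from the paper):

```python
# IPD stage-game payoffs for the row player, standard values:
# R=3 (mutual cooperation), T=5 (temptation), S=0 (sucker), P=1 (mutual defection)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(opponent_action):
    """Row player's payoff-maximising action against a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)])

# Whatever the opponent plays, defection earns more, so independent
# self-interested learners converge to the collectively poor (D, D).
assert best_response("C") == "D" and best_response("D") == "D"
```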

Mirror Learning: A Unifying Framework of Policy Optimisation

1 code implementation · 7 Jan 2022 · Jakub Grudzien Kuba, Christian Schroeder de Witt, Jakob Foerster

In contrast, in this paper we introduce a novel theoretical framework, named Mirror Learning, which provides theoretical guarantees to a large class of algorithms, including TRPO and PPO.
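
One of the algorithms the framework covers, PPO, has a per-sample clipped surrogate objective whose standard form is easy to state. This is the textbook PPO objective, shown only as context for the class of methods Mirror Learning unifies, not the paper's framework itself:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = max(1 - eps, min(1 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: the gain is capped once the ratio exceeds 1 + eps.
print(ppo_clip_objective(1.5, advantage=1.0))   # capped at the clip boundary
# Negative advantage: the objective never rewards moving past the clip.
print(ppo_clip_objective(0.5, advantage=-1.0))
```

The clip acts as the "drift" penalty discouraging large policy updates, which is the kind of structure the Mirror Learning framework places guarantees on.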

Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS

no code implementations · 23 Nov 2021 · Christian Schroeder de Witt, Yongchao Huang, Philip H. S. Torr, Martin Strohmeier

We then argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions, and introduce a temporally extended multi-agent reinforcement learning framework in which the resultant dynamics can be studied.

Tasks: Continual Learning, Multi-agent Reinforcement Learning, +2

Communicating via Markov Decision Processes

1 code implementation · 17 Jul 2021 · Samuel Sokota, Christian Schroeder de Witt, Maximilian Igl, Luisa Zintgraf, Philip Torr, Martin Strohmeier, J. Zico Kolter, Shimon Whiteson, Jakob Foerster

We contribute a theoretically grounded approach to MCGs based on maximum entropy reinforcement learning and minimum entropy coupling that we call MEME.

Tasks: Multi-agent Reinforcement Learning

Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?

4 code implementations · 18 Nov 2020 · Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, Shimon Whiteson

Most recently developed approaches to cooperative multi-agent reinforcement learning in the \emph{centralized training with decentralized execution} setting involve estimating a centralized, joint value function.

Tasks: Reinforcement Learning, +2

Simulation-Based Inference for Global Health Decisions

2 code implementations · 14 May 2020 · Christian Schroeder de Witt, Bradley Gram-Hansen, Nantas Nardelli, Andrew Gambardella, Rob Zinkov, Puneet Dokania, N. Siddharth, Ana Belen Espinosa-Gonzalez, Ara Darzi, Philip Torr, Atılım Güneş Baydin

The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies.

Tasks: Bayesian Inference, Epidemiology
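
The in-silico epidemiological models referred to can be as simple as a discrete-time compartmental simulator. A minimal sketch of the classic SIR model (a generic illustration with assumed parameter values, not the paper's far more detailed simulator):

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One Euler step of the SIR compartmental model, working with
    population fractions, so s + i + r stays constant."""
    new_inf = beta * s * i   # susceptible -> infectious
    new_rec = gamma * i      # infectious -> recovered
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(days=160, i0=0.001):
    s, i, r = 1.0 - i0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        traj.append((s, i, r))
    return traj

traj = simulate()
peak_day = max(range(len(traj)), key=lambda t: traj[t][1])
```

Simulation-based inference then treats such a simulator as a black box: parameters like `beta` and `gamma` are inferred by comparing simulated trajectories against observed case data.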

Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

1 code implementation · 19 Mar 2020 · Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson

At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted.

Tasks: Reinforcement Learning, +2
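
The monotonic factorisation in the title constrains the mixed value so that it can only increase in each agent's utility, which is what keeps decentralised greedy action selection consistent with the joint optimum. A hedged sketch of the linear, fixed-weight special case (the actual method generates non-negative mixing weights with a state-conditioned hypernetwork):

```python
from itertools import product

def monotonic_mix(agent_qs, weights, bias=0.0):
    """Monotonic mixing: non-negative weights give dQ_tot/dQ_a >= 0,
    so maximising each agent's own utility maximises the joint value."""
    assert all(w >= 0 for w in weights)
    return sum(w * q for w, q in zip(weights, agent_qs)) + bias

agent_action_qs = [[1.0, 3.0], [2.0, 0.5]]  # per-agent utilities Q_a(u), 2 actions each
greedy = tuple(max(range(2), key=lambda u, qs=qs: qs[u]) for qs in agent_action_qs)
joint_best = max(
    product(range(2), repeat=2),
    key=lambda a: monotonic_mix([agent_action_qs[i][a[i]] for i in range(2)], [0.7, 0.3]),
)
# Decentralised greedy choices coincide with the joint argmax.
assert joint_best == greedy
```

This consistency is what allows centralised training of the mixed value while each agent executes its own greedy policy at test time.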

Efficient Bayesian Inference for Nested Simulators

no code implementations · AABI (Approximate Inference) Symposium 2019 · Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Saeid Naderiparizi, Adam Scibior, Andreas Munk, Frank Wood, Mehrdad Ghadiri, Philip Torr, Yee Whye Teh, Atilim Gunes Baydin, Tom Rainforth

We introduce two approaches for conducting efficient Bayesian inference in stochastic simulators containing nested stochastic sub-procedures, i.e., internal procedures for which the density cannot be calculated directly, such as rejection sampling loops.

Tasks: Bayesian Inference
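
A toy example of the problem setting, sketched under assumed names (none taken from the paper): a simulator whose internals contain a rejection-sampling loop. The loop is easy to run forward, but the density of its output has no closed form available to an outer inference engine.

```python
import random

def truncated_gaussian(rng, lo=0.0, hi=1.0):
    """Nested sub-procedure: a rejection-sampling loop. Easy to sample
    from, but its output density is not directly computable."""
    while True:
        x = rng.gauss(0.5, 0.5)
        if lo <= x <= hi:
            return x

def simulator(rng):
    """Outer simulator calling the nested rejection sampler: draws a
    success rate, then counts successes in 10 Bernoulli trials."""
    rate = truncated_gaussian(rng)
    return sum(rng.random() < rate for _ in range(10))

rng = random.Random(0)
draws = [simulator(rng) for _ in range(1000)]
```

Standard likelihood-based inference stalls on `simulator` because the density of `rate` is intractable; the paper's contribution is handling exactly this nesting efficiently.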

Stratospheric Aerosol Injection as a Deep Reinforcement Learning Problem

no code implementations · 17 May 2019 · Christian Schroeder de Witt, Thomas Hornigold

As global greenhouse gas emissions continue to rise, the use of stratospheric aerosol injection (SAI), a form of solar geoengineering, is increasingly considered as a means of artificially mitigating climate change effects.

Tasks: Reinforcement Learning

QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

13 code implementations · ICML 2018 · Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson

At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted.

Tasks: Multi-agent Reinforcement Learning, Reinforcement Learning, +4
