
Rethinking the Implementation Matters in Cooperative Multi-Agent Reinforcement Learning

Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs through its successful application to multi-agent cooperative tasks such as computer games and robot swarms. QMIX, a widely popular MARL algorithm, has been used to solve cooperative tasks such as the StarCraft Multi-Agent Challenge (SMAC) and Difficulty-Enhanced Predator-Prey (DEPP). Recent variants of QMIX aim to relax its monotonicity constraint, which allows performance improvements in SMAC. In this paper, however, we investigate the code-level optimizations of these variants and the monotonicity constraint itself. We find that (1) the improvements of the variants are significantly affected by various code-level optimizations; (2) QMIX with normalized optimizations outperforms previous works in SMAC; and (3) the monotonicity constraint can improve sample efficiency in SMAC and DEPP. Finally, we present a theoretical analysis and discussion of why QMIX works well in SMAC. We open-source the code at \url{https://github.com/hijkzzz/pymarl2}.
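To make the monotonicity constraint mentioned in the abstract concrete: QMIX mixes per-agent utilities Q_a into a joint value Q_tot through a state-conditioned mixing network whose weights are kept non-negative, which guarantees that Q_tot is non-decreasing in each Q_a. The PyTorch sketch below illustrates this mechanism only; the class name `MonotonicMixer`, the layer sizes, and the hypernetwork layout are illustrative assumptions, not the pymarl2 implementation.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Minimal QMIX-style mixing network (illustrative sketch).

    Monotonicity (dQ_tot/dQ_a >= 0 for every agent a) is enforced by
    taking the absolute value of the hypernetwork-generated mixing
    weights, so Q_tot cannot decrease when any agent utility increases.
    """

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: map the global state to mixing weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        bs = agent_qs.size(0)
        qs = agent_qs.view(bs, 1, self.n_agents)
        # abs() keeps every mixing weight non-negative: this is the
        # monotonicity constraint the abstract refers to.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(qs, w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2  # (batch, 1, 1)
        return q_tot.view(bs, 1)

# Usage: mix per-agent utilities into one joint value for 3 agents.
mixer = MonotonicMixer(n_agents=3, state_dim=10)
q_tot = mixer(torch.randn(4, 3), torch.randn(4, 10))  # (4, 1)
```

Because the biases are unconstrained, the mixer remains expressive over the state while still satisfying the monotonicity property; the variants the paper studies relax exactly this non-negativity requirement.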
