Search Results for author: Seth Austin Harding

Found 3 papers, 1 paper with code

Revisiting the Monotonicity Constraint in Cooperative Multi-Agent Reinforcement Learning

no code implementations • 29 Sep 2021 • Jian Hu, Siyang Jiang, Seth Austin Harding, Haibin Wu, Shih-wei Liao

QMIX, a popular MARL algorithm based on the monotonicity constraint, has been used as a baseline for benchmark environments such as the StarCraft Multi-Agent Challenge (SMAC) and Predator-Prey (PP).

Tasks: Reinforcement Learning (RL) +2
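
The abstract above refers to the monotonicity constraint that QMIX places on value factorisation. As a rough illustration (not the authors' implementation), the sketch below shows the standard way this constraint is enforced in a QMIX-style mixing network: the hypernetwork outputs are passed through an absolute value so every mixing weight is non-negative, making the joint value monotonic in each agent's utility. Class name, layer sizes, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MonotonicMixer(nn.Module):
    """Minimal sketch of a QMIX-style monotonic mixing network (illustrative,
    not the paper's exact architecture). Taking the absolute value of the
    hypernetwork outputs keeps every mixing weight non-negative, which is
    what enforces dQ_tot / dQ_i >= 0 (the monotonicity constraint)."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # State-conditioned hypernetworks produce the mixing weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        bs = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)  # Q_tot
```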

Rethinking the Implementation Matters in Cooperative Multi-Agent Reinforcement Learning

2 code implementations • 6 Feb 2021 • Jian Hu, Siyang Jiang, Seth Austin Harding, Haibin Wu, Shih-wei Liao

Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs with its successful application to multi-agent cooperative tasks such as computer games and robot swarms.

Tasks: Reinforcement Learning (RL) +3

QR-MIX: Distributional Value Function Factorisation for Cooperative Multi-Agent Reinforcement Learning

no code implementations • 9 Sep 2020 • Jian Hu, Seth Austin Harding, Haibin Wu, Siyue Hu, Shih-wei Liao

Existing methods such as Value Decomposition Networks (VDN) and QMIX estimate the long-term return as a single scalar, which does not capture the randomness of the return.

Tasks: Reinforcement Learning (RL) +2
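
The limitation described above (a single scalar estimate of the return) is what distributional methods address by learning a set of return quantiles instead. The sketch below is a generic quantile-regression Huber loss in the QR-DQN style that such a method could build on; it is an illustrative assumption, not QR-MIX's actual loss or code, and the function name and quantile count are made up for the example.

```python
import torch


def quantile_huber_loss(pred_quantiles: torch.Tensor,
                        target_quantiles: torch.Tensor,
                        kappa: float = 1.0) -> torch.Tensor:
    """Quantile Huber loss as used in QR-DQN-style distributional RL.

    pred_quantiles:   (batch, N) predicted quantiles of the return.
    target_quantiles: (batch, N) target quantiles (already detached).
    """
    n = pred_quantiles.size(1)
    # Quantile midpoints tau_i = (i + 0.5) / N for the predicted quantiles.
    taus = (torch.arange(n, dtype=pred_quantiles.dtype,
                         device=pred_quantiles.device) + 0.5) / n
    # Pairwise TD errors: td[b, i, j] = target_j - pred_i.
    td = target_quantiles.unsqueeze(1) - pred_quantiles.unsqueeze(2)
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    # The asymmetric weight |tau_i - 1{td < 0}| turns the Huber loss into a
    # quantile-regression loss, so the whole return distribution is learned.
    weight = (taus.view(1, n, 1) - (td.detach() < 0).float()).abs()
    return (weight * huber).mean()
```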
