Value-Decomposition Networks For Cooperative Multi-Agent Learning

We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the "lazy agent" problem, which arise due to partial observability. We address these problems by training individual agents with a novel value decomposition network architecture, which learns to decompose the team value function into agent-wise value functions. We perform an experimental evaluation across a range of partially-observable multi-agent domains and show that learning such value-decompositions leads to superior results, in particular when combined with weight sharing, role information and information channels.
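The core idea of the decomposition can be illustrated with a minimal sketch (not the paper's network, just the additivity assumption): the team value is modeled as the sum of per-agent values, so each agent acting greedily on its own component also maximizes the joint value. The observation names and toy Q-tables below are hypothetical.

```python
# VDN additivity assumption (illustrative sketch):
#   Q_tot(s, a1, a2) = Q1(o1, a1) + Q2(o2, a2)
# Hypothetical per-agent Q-tables: observation -> {action: value}.
Q1 = {"o1": {"left": 1.0, "right": 3.0}}
Q2 = {"o2": {"left": 2.0, "right": 0.5}}

def q_tot(a1, a2):
    """Joint team value under the additive decomposition."""
    return Q1["o1"][a1] + Q2["o2"][a2]

# Decentralized greedy actions: each agent maximizes only its own component.
greedy = (
    max(Q1["o1"], key=Q1["o1"].get),
    max(Q2["o2"], key=Q2["o2"].get),
)

# Centralized greedy search over the joint action space agrees, because a
# sum of independent terms is maximized component-wise.
joint = max(
    ((a1, a2) for a1 in Q1["o1"] for a2 in Q2["o2"]),
    key=lambda pair: q_tot(*pair),
)
assert greedy == joint  # ("right", "left")
```

This component-wise consistency is what lets VDN train with a single joint reward while still executing fully decentralized policies.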


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| SMAC+ | Def_Armored_parallel | VDN | Median Win Rate | 5.0 | # 4 |
| SMAC+ | Def_Armored_sequential | VDN | Median Win Rate | 96.9 | # 2 |
| SMAC+ | Def_Infantry_parallel | VDN | Median Win Rate | 95.0 | # 3 |
| SMAC+ | Def_Infantry_sequential | VDN | Median Win Rate | 96.9 | # 5 |
| SMAC+ | Def_Outnumbered_parallel | VDN | Median Win Rate | 0.0 | # 4 |
| SMAC+ | Def_Outnumbered_sequential | VDN | Median Win Rate | 15.6 | # 4 |
| SMAC+ | Off_Complicated_parallel | VDN | Median Win Rate | 70.0 | # 2 |
| SMAC+ | Off_Distant_parallel | VDN | Median Win Rate | 85.0 | # 2 |
| SMAC+ | Off_Hard_parallel | VDN | Median Win Rate | 15.0 | # 2 |
| SMAC+ | Off_Near_parallel | VDN | Median Win Rate | 90.0 | # 3 |
| SMAC+ | Off_Superhard_parallel | VDN | Median Win Rate | 0.0 | # 1 |
