SMAC

38 papers with code • 11 benchmarks • 1 dataset

The StarCraft Multi-Agent Challenge (SMAC) is a benchmark for cooperative multi-agent reinforcement learning (MARL) built on the StarCraft II game engine. It features partial observability, challenging dynamics, and high-dimensional observation spaces, and each game unit is controlled by an independent RL agent.
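
The loop below sketches how an episode is typically driven through SMAC's Python interface; it is a sketch assuming the `smac` package from the oxwhirl/smac repository and a local StarCraft II installation.

```python
import numpy as np

def run_random_episode(map_name="8m"):
    """Roll out one SMAC episode with uniformly random valid actions."""
    from smac.env import StarCraft2Env  # requires the smac package + StarCraft II
    env = StarCraft2Env(map_name=map_name)
    n_agents = env.get_env_info()["n_agents"]
    env.reset()
    terminated, episode_return = False, 0.0
    while not terminated:
        obs = env.get_obs()      # list of per-agent partial observations
        state = env.get_state()  # global state, used only for centralised training
        actions = [
            np.random.choice(np.nonzero(env.get_avail_agent_actions(i))[0])
            for i in range(n_agents)
        ]
        reward, terminated, _info = env.step(actions)
        episode_return += reward
    env.close()
    return episode_return
```

Per-agent available-action masks matter here: in SMAC an agent may only attack enemies in range or move in free directions, so actions must be sampled from the valid set.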

Most implemented papers

Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

oxwhirl/pymarl 19 Mar 2020

At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted.
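
The paper's core idea (QMIX) is to factorise the joint action-value monotonically in each per-agent value, so that maximising each agent's own Q also maximises the joint Q. A minimal numpy sketch of the monotonicity trick, enforcing non-negative mixing weights via absolute values, is below; the real mixer is a state-conditioned hypernetwork with ELU activations, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotonic_mix(agent_qs, w1, b1, w2, b2):
    """QMIX-style mixer: absolute-valued weights keep Q_tot monotone
    in every per-agent Q value (biases stay unconstrained)."""
    hidden = np.maximum(agent_qs @ np.abs(w1) + b1, 0.0)
    return hidden @ np.abs(w2) + b2

n_agents, n_hidden = 3, 8
w1 = rng.normal(size=(n_agents, n_hidden)); b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=(n_hidden, 1)); b2 = rng.normal(size=1)

# Monotonicity check: raising any single agent's Q never lowers Q_tot.
q = rng.normal(size=n_agents)
base = monotonic_mix(q, w1, b1, w2, b2)
for i in range(n_agents):
    bumped = q.copy(); bumped[i] += 1.0
    assert monotonic_mix(bumped, w1, b1, w2, b2) >= base
```

Because every weight seen by the forward pass is non-negative and ReLU is nondecreasing, the argmax of the joint value decomposes into independent per-agent argmaxes at execution time.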

Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning

sisl/DICG 19 Jun 2020

A coordination-graph-based formalisation allows reasoning about the joint action based on the structure of interactions between agents.
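
For a small graph the idea can be sketched directly: the joint value is a sum of pairwise payoffs on the graph's edges, maximised here by brute force. Real implementations use message passing such as max-plus instead of enumeration; the payoff tables below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_actions = 3
# Line graph over three agents: edges (0,1) and (1,2), each with a payoff table.
edges = {(0, 1): rng.normal(size=(n_actions, n_actions)),
         (1, 2): rng.normal(size=(n_actions, n_actions))}

def joint_q(a):
    """Joint value = sum of pairwise edge payoffs (coordination-graph form)."""
    return sum(table[a[i], a[j]] for (i, j), table in edges.items())

# Exhaustive maximisation over all 3^3 joint actions (feasible only for tiny graphs).
best = max(itertools.product(range(n_actions), repeat=3), key=joint_q)
```

The factorisation is what makes coordination tractable: the graph restricts which agents' action choices interact, so structured maximisation scales far better than reasoning over the full joint action space.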

Additive Tree-Structured Conditional Parameter Spaces in Bayesian Optimization: A Novel Covariance Function and a Fast Implementation

maxc01/addtree 6 Oct 2020

Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions that are expensive to evaluate.
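
A toy 1D sketch of the BO loop this sentence describes: fit a Gaussian-process posterior to the evaluations so far, then evaluate the objective where expected improvement is highest. The objective `f`, kernel, and all hyperparameters are illustrative, not from the paper.

```python
import numpy as np
from math import erf, sqrt, pi

def f(x):                      # hypothetical expensive black-box objective
    return -(x - 2.0) ** 2

def rbf(a, b, length=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean/variance at query points Xs given data (X, y)."""
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    sd = np.sqrt(var)
    z = (mu - best) / sd
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sd * pdf

grid = np.linspace(0.0, 5.0, 201)
X = np.array([0.5, 2.8, 4.5])          # initial design points
y = f(X)
for _ in range(5):                      # BO loop: fit, maximise EI, evaluate
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best_x = X[np.argmax(y)]
```

Each iteration spends one expensive evaluation where the surrogate predicts the best trade-off between a high mean and high uncertainty, which is where BO's sample efficiency comes from.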

Graph Convolutional Value Decomposition in Multi-Agent Reinforcement Learning

navid-naderi/GraphMIX 9 Oct 2020

We propose a novel framework for value function factorization in multi-agent deep reinforcement learning (MARL) using graph neural networks (GNNs).
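
A generic graph-convolution step (not GraphMIX's exact architecture, which is described in the paper) that mixes per-agent features along graph edges can be sketched as:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: average each node's neighbourhood
    (including itself) and apply a shared linear map followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalise by degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-agent line graph
H = rng.normal(size=(3, 4))   # per-agent features (e.g. local utilities)
W = rng.normal(size=(4, 2))   # weights shared across all agents
H_next = gcn_layer(A, H, W)   # shape (3, 2): features mixed along graph edges
```

Weight sharing across nodes is what lets such a factorisation generalise across team sizes, since the same layer applies regardless of how many agents the graph contains.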

AutoWeka4MCPS-AVATAR: Accelerating Automated Machine Learning Pipeline Composition and Optimisation

UTS-AAi/autoweka 21 Nov 2020

Instead of executing the original ML pipeline to evaluate its validity, the AVATAR evaluates its surrogate model constructed by capabilities and effects of the ML pipeline components and input/output simplified mappings.

QVMix and QVMix-Max: Extending the Deep Quality-Value Family of Algorithms to Cooperative Multi-Agent Reinforcement Learning

PaLeroy/QVMix 22 Dec 2020

This paper introduces four new algorithms that can be used for tackling multi-agent reinforcement learning (MARL) problems occurring in cooperative settings.

UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers

hhhusiyi-monash/UPDeT 20 Jan 2021

Recent advances in multi-agent reinforcement learning have largely been limited to training one model from scratch for every new task.

DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning

j3soon/dfac 16 Feb 2021

In fully cooperative multi-agent reinforcement learning (MARL) settings, the environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of the other agents.

SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning

hsvgbkhgbv/shapley-q-learning 31 May 2021

This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory.
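
The Shapley value underlying the method can be computed exactly for small games by averaging each player's marginal contribution over all orderings of the grand coalition. The characteristic function below is a toy illustration, not SHAQ's learned value factorisation.

```python
import itertools
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average marginal contribution of each player
    over all orderings of the grand coalition."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = []
        for p in order:
            before = v(frozenset(coalition))
            coalition.append(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: phi[p] / factorial(n) for p in players}

# Toy characteristic function: agents 0 and 1 only produce value together.
def v(S):
    return 5.0 if {0, 1} <= S else 0.0

sv = shapley_values([0, 1, 2], v)
# agents 0 and 1 split the 5.0 equally; agent 2 contributes nothing
```

The appeal for interpretability is that the values are efficient (they sum to the grand coalition's value) and symmetric, so each agent's credit reflects its actual contribution to the team reward.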

Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks

reinholdm/offline-pre-trained-multi-agent-decision-transformer 6 Dec 2021

In this paper, we facilitate the research by providing large-scale datasets, and use them to examine the usage of the Decision Transformer in the context of MARL.