SMAC
38 papers with code • 11 benchmarks • 1 dataset
The StarCraft Multi-Agent Challenge (SMAC) is a benchmark featuring partial observability, challenging dynamics, and high-dimensional observation spaces. SMAC is built on the StarCraft II game engine, providing a testbed for research in cooperative multi-agent reinforcement learning (MARL) in which each game unit is an independent RL agent.
Libraries
Use these libraries to find SMAC models and implementations.
Most implemented papers
Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted.
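The key idea behind this paper's QMIX method is to mix per-agent Q-values into a joint Q_tot through a network constrained to be monotonic in each agent's value, so that per-agent greedy action selection recovers the joint greedy action. A minimal sketch of that monotonic mixing, with a toy fixed-weight "hypernetwork" standing in for the learned one (all parameter values here are illustrative, not from the paper):

```python
import numpy as np

# Toy "hypernetwork": maps the global state to mixing weights.
# Taking the absolute value enforces non-negative weights, which is
# the monotonicity constraint QMIX places on its mixer.
W_hyper = np.array([[0.5, -1.0],
                    [-0.3, 0.8]])  # illustrative fixed parameters

def monotonic_mix(agent_qs, state):
    """Combine per-agent Q-values into Q_tot with non-negative weights."""
    w = np.abs(W_hyper @ state)   # state-dependent weights, all w_i >= 0
    return float(w @ agent_qs)    # Q_tot

state = np.array([1.0, 2.0])
q_low = np.array([0.2, 0.5])
q_high = np.array([0.9, 0.5])     # agent 0 raises its own Q-value

# Monotonicity: raising any individual Q_i can never lower Q_tot,
# so each agent maximising its own Q_i also maximises Q_tot.
assert monotonic_mix(q_high, state) >= monotonic_mix(q_low, state)
```

In the actual method the mixing weights are produced by a hypernetwork conditioned on the global state available during centralised training; at execution time each agent acts greedily on its own Q-value alone.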
Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning
Coordination graph based formalization allows reasoning about the joint action based on the structure of interactions.
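A coordination graph decomposes the joint value into per-agent utilities plus pairwise payoffs defined only on the graph's edges, Q(a) = Σᵢ qᵢ(aᵢ) + Σ₍ᵢ,ⱼ₎ qᵢⱼ(aᵢ, aⱼ). A minimal sketch with hypothetical payoff tables (the paper itself learns these terms with neural networks and implies the graph structure rather than fixing it):

```python
import itertools
import numpy as np

# Toy coordination graph: 3 agents, 2 actions each, edges (0,1) and (1,2).
n_agents, n_actions = 3, 2
utilities = np.array([[0.0, 1.0],    # q_i(a_i) for each agent i
                      [0.5, 0.0],
                      [0.0, 0.2]])
edges = {(0, 1): np.array([[2.0, 0.0], [0.0, 0.0]]),   # q_ij(a_i, a_j)
         (1, 2): np.array([[0.0, 0.0], [0.0, 1.4]])}

def joint_q(actions):
    """Joint value = per-agent utilities + pairwise payoffs on graph edges."""
    q = sum(utilities[i, a] for i, a in enumerate(actions))
    q += sum(payoff[actions[i], actions[j]] for (i, j), payoff in edges.items())
    return q

# Brute-force joint argmax; message passing (e.g. max-plus) exploits the
# graph structure to avoid enumerating the exponential joint action space.
best = max(itertools.product(range(n_actions), repeat=n_agents), key=joint_q)
print(best, joint_q(best))  # (0, 0, 1) with Q = 2.7
```

The edge payoff between agents 0 and 1 here outweighs agent 0's individual utility, so the joint optimum differs from what independent per-agent maximisation would pick, which is exactly the kind of interaction a coordination graph captures.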
Additive Tree-Structured Conditional Parameter Spaces in Bayesian Optimization: A Novel Covariance Function and a Fast Implementation
Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate.
Graph Convolutional Value Decomposition in Multi-Agent Reinforcement Learning
We propose a novel framework for value function factorization in multi-agent deep reinforcement learning (MARL) using graph neural networks (GNNs).
AutoWeka4MCPS-AVATAR: Accelerating Automated Machine Learning Pipeline Composition and Optimisation
Instead of executing the original ML pipeline to evaluate its validity, AVATAR evaluates a surrogate model constructed from the capabilities and effects of the pipeline's components and simplified input/output mappings.
QVMix and QVMix-Max: Extending the Deep Quality-Value Family of Algorithms to Cooperative Multi-Agent Reinforcement Learning
This paper introduces four new algorithms that can be used for tackling multi-agent reinforcement learning (MARL) problems occurring in cooperative settings.
UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers
Recent advances in multi-agent reinforcement learning have been largely limited to training one model from scratch for every new task.
DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning
In fully cooperative multi-agent reinforcement learning (MARL) settings, the environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of the other agents.
SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning
This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory.
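The Shapley value credits each agent with its marginal contribution averaged over all coalitions, which is what makes the resulting factorisation interpretable. A minimal exact computation on a hypothetical 3-agent cooperative game (the characteristic function `v` below is illustrative; the paper approximates such quantities with learned Q-functions rather than enumerating coalitions):

```python
import itertools
from math import factorial

# Toy cooperative game: v(S) is the value coalition S achieves on its own.
v = {(): 0.0, (0,): 1.0, (1,): 1.0, (2,): 0.0,
     (0, 1): 4.0, (0, 2): 1.0, (1, 2): 2.0, (0, 1, 2): 5.0}

def shapley(i, n=3):
    """Exact Shapley value: agent i's marginal contribution to each
    coalition S, weighted by how often S precedes i in a random ordering."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[tuple(sorted(S + (i,)))] - v[S])
    return total

phi = [shapley(i) for i in range(3)]
# Efficiency: the per-agent credits sum exactly to the grand coalition's value.
assert abs(sum(phi) - v[(0, 1, 2)]) < 1e-9
```

Agents 0 and 1 receive most of the credit here because their pairing creates value beyond their individual contributions; the efficiency property asserted at the end is the axiom that lets a Shapley-based factorisation decompose the team reward without losing any of it.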
Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks
In this paper, we facilitate the research by providing large-scale datasets, and use them to examine the usage of the Decision Transformer in the context of MARL.