StarCraft

18 papers with code · Playing Games

StarCraft is a real-time strategy (RTS) game; the task is to train an agent to play the game.

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

StarCraft II: A New Challenge for Reinforcement Learning

16 Aug 2017 · deepmind/pysc2

We present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play comparable to a novice player.

STARCRAFT · STARCRAFT II
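To give a sense of how an agent plugs into the released environment, here is a minimal sketch that runs the bundled random agent on the MoveToBeacon mini-game via the deepmind/pysc2 Python API. It assumes a local StarCraft II installation with the mini-game map pack and a pysc2 2.x-style interface; exact argument names vary across releases.

```python
# Minimal sketch: run pysc2's bundled RandomAgent on a mini-game map.
# Assumes StarCraft II and the mini-game maps are installed locally,
# and a pysc2 2.x-style API; argument names differ in older releases.
from absl import app
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env


def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
        map_name="MoveToBeacon",                       # one of the released mini-games
        players=[sc2_env.Agent(sc2_env.Race.terran)],  # single learning agent
        agent_interface_format=sc2_env.AgentInterfaceFormat(
            feature_dimensions=sc2_env.Dimensions(screen=84, minimap=64)),
        step_mul=8,                                    # act every 8 game frames
        visualize=False,
    ) as env:
        run_loop.run_loop([agent], env, max_episodes=1)


if __name__ == "__main__":
    app.run(main)
```

Swapping RandomAgent for a scripted or learned agent that subclasses pysc2's base agent class gives the kind of baseline setup the paper reports on.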

ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games

NeurIPS 2017 · facebookresearch/ELF

In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. The platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters, and it can host existing C/C++-based game environments such as the Arcade Learning Environment.

ATARI GAMES · STARCRAFT

TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games

1 Nov 2016 · TorchCraft/TorchCraft

We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.

STARCRAFT

STARDATA: A StarCraft AI Research Dataset

7 Aug 2017 · TorchCraft/StarData

We provide full game state data along with the original replays that can be viewed in StarCraft. We illustrate the diversity of the data with various statistics and provide examples of tasks that benefit from the dataset.

IMITATION LEARNING · STARCRAFT

A Dataset for StarCraft AI & an Example of Armies Clustering

19 Nov 2012 · TorchCraft/StarData

This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of army compositions.

STARCRAFT

MazeBase: A Sandbox for Learning from Games

23 Nov 2015 · facebook/MazeBase

This paper introduces MazeBase: an environment for simple 2D games, designed as a sandbox for machine learning approaches to reasoning and planning. Within it, we create 10 simple games embodying a range of algorithmic tasks (e.g. if-then statements or set negation).

STARCRAFT

Multi-platform Version of StarCraft: Brood War in a Docker Container: Technical Report

7 Jan 2018 · Games-and-Simulations/sc-docker

We present a dockerized version of the real-time strategy game StarCraft: Brood War, commonly used as a domain for AI research, with a pre-installed collection of AI development tools supporting all the major types of StarCraft bots. This provides a convenient way to deploy StarCraft AIs on numerous hosts at once and across multiple platforms, despite StarCraft's limited OS support.

STARCRAFT

The StarCraft Multi-Agent Challenge

11 Feb 2019 · oxwhirl/pymarl

A particularly challenging class of problems in this area is partially observable, cooperative, multi-agent learning, in which teams of agents must learn to coordinate their behaviour while conditioning only on their private observations. In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap.

MULTI-AGENT REINFORCEMENT LEARNING · STARCRAFT · STARCRAFT II
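The benchmark itself is exposed through the companion smac environment package (oxwhirl/smac), which pymarl builds on. The sketch below, following the pattern in that package's documentation, steps a SMAC scenario with random decentralised actions drawn from each agent's own action mask; the map name "8m" and the exact API are assumptions that may shift between releases.

```python
# Minimal sketch: random decentralised actions in a SMAC scenario.
# Uses the companion smac package (oxwhirl/smac) that pymarl builds on;
# assumes a local StarCraft II install and the SMAC maps.
import numpy as np
from smac.env import StarCraft2Env


def main():
    env = StarCraft2Env(map_name="8m")        # 8 Marines vs 8 Marines
    env_info = env.get_env_info()
    n_agents = env_info["n_agents"]

    for _ in range(2):                        # a couple of throwaway episodes
        env.reset()
        terminated = False
        episode_reward = 0.0
        while not terminated:
            # Each agent conditions only on its own availability mask here;
            # a learning agent would also use env.get_obs() per agent.
            actions = []
            for agent_id in range(n_agents):
                avail = env.get_avail_agent_actions(agent_id)
                actions.append(np.random.choice(np.nonzero(avail)[0]))
            reward, terminated, _ = env.step(actions)
            episode_reward += reward
        print("Episode reward:", episode_reward)

    env.close()


if __name__ == "__main__":
    main()
```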

MSC: A Dataset for Macro-Management in StarCraft II

9 Oct 2017 · wuhuikai/MSC

We also split MSC into training, validation and test sets for the convenience of evaluation and comparison. Various downstream tasks and analyses of the dataset are also described for the sake of research on macro-management in StarCraft II.

STARCRAFT · STARCRAFT II

Learning when to Communicate at Scale in Multiagent Cooperative and Competitive Tasks

ICLR 2019 · apsdehal/gym-starcraft

Learning when to communicate, and doing so effectively, is essential in multi-agent tasks. Recent work shows that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but such approaches have been restricted to fully cooperative tasks.

STARCRAFT