Real-Time Strategy Games
24 papers with code • 0 benchmarks • 4 datasets
Real-Time Strategy (RTS) tasks involve training an agent to play video games with continuous gameplay and high-level macro-strategic goals such as map control and economic superiority.
(Image credit: Multi-platform Version of StarCraft: Brood War in a Docker Container)
Most implemented papers
The StarCraft Multi-Agent Challenge
In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem for cooperative multi-agent reinforcement learning.
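As an illustration of how SMAC is typically driven, the sketch below is a minimal random-agent episode loop following the usage pattern documented in the SMAC repository; it assumes the smac package and StarCraft II are installed, and "8m" (8 Marines vs. 8 Marines) is one of the provided micromanagement scenarios.

```python
import numpy as np
from smac.env import StarCraft2Env

# One of SMAC's micromanagement maps: 8 Marines per side.
env = StarCraft2Env(map_name="8m")
n_agents = env.get_env_info()["n_agents"]

env.reset()
terminated = False
episode_reward = 0.0
while not terminated:
    # Each agent samples uniformly from its currently available actions.
    actions = []
    for agent_id in range(n_agents):
        avail = np.nonzero(env.get_avail_agent_actions(agent_id))[0]
        actions.append(np.random.choice(avail))
    reward, terminated, info = env.step(actions)
    episode_reward += reward

print("episode reward:", episode_reward)
env.close()
```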
StarCraft II: A New Challenge for Reinforcement Learning
Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain.
Gym-μRTS: Toward Affordable Full Game Real-time Strategy Games Research with Deep Reinforcement Learning
In recent years, researchers have achieved great success in applying Deep Reinforcement Learning (DRL) algorithms to Real-time Strategy (RTS) games, creating strong autonomous agents that could defeat professional players in StarCraft II.
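Gym-μRTS exposes μRTS through a Gym-style interface, so interaction follows the familiar reset/step loop. The sketch below is a generic Gym loop with a random policy standing in for a DRL agent; the environment ID "MicroRTS-v0" is a placeholder, not the package's actual registration name, and the real constructors should be taken from the gym-microrts repository.

```python
import gym  # classic Gym API (reset/step) that Gym-μRTS builds on

# Placeholder ID: Gym-μRTS registers its own environments; see the
# gym-microrts repository for the actual names and constructors.
env = gym.make("MicroRTS-v0")

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random stand-in for a trained policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode return:", total_reward)
```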
Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset
An important feature of the dataset is simultaneous data collection from five players, which facilitates the analysis of sensor data on a team level.
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch.
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters; it can also host existing C/C++-based game environments such as the Arcade Learning Environment.
MSC: A Dataset for Macro-Management in StarCraft II
We also split MSC into training, validation, and test sets for convenient evaluation and comparison.
TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game
Both TStarBot1 and TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in a full game (1v1 Zerg-vs-Zerg on the AbyssalReef map), noting that levels 8, 9, and 10 are cheating agents with unfair advantages such as full vision of the whole map and resource harvest boosting.
A Closer Look at Invalid Action Masking in Policy Gradient Algorithms
In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games.
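The technique named in the title, invalid action masking, is usually implemented by pushing the logits of unavailable actions to a very negative value before the softmax, so the policy can never sample them. The PyTorch sketch below shows this mechanism in isolation; it is a minimal illustration assuming a discrete action space and a boolean availability mask, not the paper's own code.

```python
import torch

def masked_action_distribution(logits: torch.Tensor, action_mask: torch.Tensor):
    """Build a categorical policy that assigns (numerically) zero probability
    to invalid actions.

    logits:      (batch, n_actions) raw policy outputs
    action_mask: (batch, n_actions) boolean tensor, True where the action is legal
    """
    # Replace logits of illegal actions with a large negative number so the
    # softmax drives their probabilities to zero.
    masked_logits = torch.where(action_mask, logits,
                                torch.tensor(-1e8, dtype=logits.dtype))
    return torch.distributions.Categorical(logits=masked_logits)

# Toy usage: one state, four actions, only actions 0 and 2 are legal.
logits = torch.randn(1, 4)
mask = torch.tensor([[True, False, True, False]])
dist = masked_action_distribution(logits, mask)
action = dist.sample()             # guaranteed to be 0 or 2
log_prob = dist.log_prob(action)   # used as usual in the policy-gradient loss
```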
Action Guidance: Getting the Best of Sparse Rewards and Shaped Rewards for Real-time Strategy Games
Training agents using Reinforcement Learning in games with sparse rewards is a challenging problem, since large amounts of exploration are required to retrieve even the first reward.
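The contrast between sparse and shaped rewards is easy to show with a toy reward function. The sketch below uses purely illustrative quantities (resources gathered, units produced, damage dealt) and coefficients that are not taken from the paper: the sparse variant pays out only when a game is won, while the shaped variant returns a dense signal every step at the cost of possibly biasing the policy toward the shaping terms.

```python
from dataclasses import dataclass

@dataclass
class StepInfo:
    # Hypothetical per-step quantities an RTS environment might expose.
    is_terminal: bool = False
    agent_won: bool = False
    resources_gathered: int = 0
    units_produced: int = 0
    damage_dealt: float = 0.0

def sparse_reward(info: StepInfo) -> float:
    # Only the final outcome is rewarded: the agent sees no learning signal
    # until it has won a full game at least once.
    return 1.0 if info.is_terminal and info.agent_won else 0.0

def shaped_reward(info: StepInfo) -> float:
    # Dense intermediate bonuses (illustrative weights) give feedback at
    # every step of the episode.
    return (0.01 * info.resources_gathered
            + 0.10 * info.units_produced
            + 0.001 * info.damage_dealt)

# Example: a mid-game step where the agent gathered 5 resources.
print(sparse_reward(StepInfo(resources_gathered=5)))   # 0.0
print(shaped_reward(StepInfo(resources_gathered=5)))   # 0.05
```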