
Winning Isn't Everything: Enhancing Game Development with Intelligent Agents

Recently, there have been several high-profile achievements of agents that learn to play games against humans and beat them. In this paper, we study the problem of training intelligent agents in service of game development. Unlike agents built to "beat the game", our agents aim to produce human-like behavior to help with game evaluation and balancing. We discuss two fundamental metrics by which we measure the human-likeness of agents, namely skill and style, both multi-faceted concepts with practical implications outlined in this paper. We report four case studies in which the style and skill requirements inform the choice of algorithms and metrics used to train agents, ranging from A* search to state-of-the-art deep reinforcement learning. We further show that the learning potential of state-of-the-art deep RL models does not transfer seamlessly from benchmark environments to target ones without heavy hyperparameter tuning, so that engineering effort and computational cost scale linearly with the number of target domains.
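
The abstract does not spell out how skill and style are quantified, but a minimal sketch can make the two metrics concrete. Assuming skill is proxied by mean episode score and style by the similarity of the agent's action distribution to that of human players (here, one minus the Jensen-Shannon distance), a toy evaluation might look like the following. All function names, metric definitions, and numbers are hypothetical illustrations, not the paper's actual measures.

```python
import numpy as np

def skill(agent_scores):
    """Skill proxy: mean score the agent achieves across evaluation episodes."""
    return float(np.mean(agent_scores))

def style(agent_action_freqs, human_action_freqs, eps=1e-9):
    """Style proxy: similarity of the agent's action distribution to human play,
    computed as 1 minus the Jensen-Shannon distance (1.0 = identical)."""
    p = np.asarray(agent_action_freqs, dtype=float) + eps
    q = np.asarray(human_action_freqs, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    # Jensen-Shannon divergence, normalized by log 2 so it lies in [0, 1]
    js = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
    return 1.0 - float(np.sqrt(js / np.log(2)))

# Hypothetical evaluation data
agent_scores = [120, 95, 110, 130]
agent_actions = [0.5, 0.3, 0.2]   # e.g. move / attack / idle frequencies
human_actions = [0.4, 0.35, 0.25]

print(f"skill = {skill(agent_scores):.1f}, style = {style(agent_actions, human_actions):.2f}")
```

Under this reading, an agent tuned to "beat the game" maximizes only the skill axis, while an agent intended for playtesting and balancing must also score well on style.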
