It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic.
We present a new concept called Game Mechanic Alignment theory as a way to organize game mechanics through the lens of systemic rewards and agential motivations.
This article surveys the various deep learning methods that have been applied to generate game content directly or indirectly, discusses deep learning methods that could be used for content generation purposes but are rarely used today, and envisages some limitations and potential future directions of deep learning for procedural content generation.
Generative adversarial networks (GANs) are quickly becoming a ubiquitous approach to procedurally generating video game levels.
This paper presents a level generation method for Super Mario by stitching together pre-generated "scenes" that contain specific mechanics, using mechanic-sequences from agent playthroughs as input specifications.
Deep Reinforcement Learning (DRL) has shown impressive performance on domains with visual inputs, in particular various games.
The results demonstrate that the new approach not only generates a larger number of playable levels but also generates fewer duplicate levels compared to a standard GAN.
In a user study, human-identified mechanics are compared against system-identified critical mechanics to verify alignment between humans and the system.
Deep reinforcement learning has learned to play many games well, but failed on others.
Quality-diversity (QD) algorithms search for a set of good solutions which cover a space as defined by behavior metrics.
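To make the QD definition above concrete, here is a minimal sketch of MAP-Elites, a representative QD algorithm: it discretizes a one-dimensional behavior metric into bins and keeps the highest-fitness solution found in each bin. The toy domain, bin count, and mutation operator are illustrative assumptions, not taken from any of the surveyed papers.

```python
import random

def map_elites(evaluate, behavior, mutate, init, bins=10, iters=1000, seed=0):
    """Minimal MAP-Elites: keep the best solution found in each behavior bin."""
    rng = random.Random(seed)
    archive = {}  # behavior bin -> (fitness, solution)
    for _ in range(iters):
        if archive:
            # Select a random elite as parent and perturb it.
            parent = rng.choice(list(archive.values()))[1]
            x = mutate(parent, rng)
        else:
            x = init(rng)
        f, b = evaluate(x), behavior(x)
        key = min(int(b * bins), bins - 1)  # discretize behavior into a bin
        if key not in archive or f > archive[key][0]:
            archive[key] = (f, x)
    return archive

# Toy domain (illustrative): solutions are floats in [0, 1]; fitness rewards
# values near 0.5, while the behavior metric is the value itself, so the
# archive ends up covering the [0, 1] behavior space with diverse solutions.
archive = map_elites(
    evaluate=lambda x: 1.0 - abs(x - 0.5),
    behavior=lambda x: x,
    mutate=lambda x, rng: min(1.0, max(0.0, x + rng.gauss(0, 0.1))),
    init=lambda rng: rng.random(),
)
```

The returned archive maps each behavior bin to its best (fitness, solution) pair, illustrating how QD search yields a *set* of good solutions covering the behavior space rather than a single optimum.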
We introduce the General Video Game Rule Generation problem, and the eponymous software framework which will be used in a new track of the General Video Game AI (GVGAI) competition.
This paper presents a two-step generative approach for creating dungeons in the rogue-like puzzle game MiniDungeons 2.
Elimination is a word puzzle game for browsers and mobile devices, where all levels are generated by a constrained evolutionary algorithm with no human intervention.
Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on an agent's ability to perform well on unseen instances of the environment.
This paper introduces an information-theoretic method for selecting a subset of problems which gives the most information about a group of problem-solving algorithms.
This paper introduces a fully automatic method for generating video game tutorials.
However, when neural networks are trained in a fixed environment, such as a single level in a video game, they will usually overfit and fail to generalize to new levels.
We describe a search-based approach to generating new levels for bullet hell games, which are action games characterized by, and requiring avoidance of, a very large number of projectiles.
We propose the problem of tutorial generation for games, i.e., generating tutorials that can teach players to play games, as an AI problem.
In 2014, the General Video Game AI (GVGAI) competition framework was created and released with the purpose of providing researchers a common, open-source, and easy-to-use platform for testing their AI methods on a potentially infinite set of games created using the Video Game Description Language (VGDL).
DeepTingle is a text prediction and classification system trained on the collected works of the renowned fantastic gay erotica author Chuck Tingle.