Search Results for author: Sam Earle

Found 15 papers, 4 papers with code

Using Fractal Neural Networks to Play SimCity 1 and Conway's Game of Life at Variable Scales

no code implementations · 29 Jan 2020 · Sam Earle

We introduce gym-city, a Reinforcement Learning environment that uses SimCity 1's game engine to simulate an urban environment, in which agents may seek to optimize one city-wide metric or any combination of several, on game boards of various sizes.

Learning Controllable Content Generators

1 code implementation · 6 May 2021 · Sam Earle, Maria Edwards, Ahmed Khalifa, Philip Bontrager, Julian Togelius

It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic.

Exploring open-ended gameplay features with Micro RollerCoaster Tycoon

no code implementations · 10 May 2021 · Michael Cerny Green, Victoria Yen, Sam Earle, Dipika Rajesh, Maria Edwards, L. B. Soros

This paper introduces MicroRCT, a novel open-source simulator inspired by the theme park sandbox game RollerCoaster Tycoon.

Evolutionary Algorithms

Illuminating Diverse Neural Cellular Automata for Level Generation

2 code implementations · 12 Sep 2021 · Sam Earle, Justin Snider, Matthew C. Fontaine, Stefanos Nikolaidis, Julian Togelius

We present a method of generating diverse collections of neural cellular automata (NCA) to design video game levels.

Generating Diverse Indoor Furniture Arrangements

no code implementations · 20 Jun 2022 · Ya-Chuan Hsu, Matthew C. Fontaine, Sam Earle, Maria Edwards, Julian Togelius, Stefanos Nikolaidis

To target specific diversity in the arrangements, we optimize the latent space of the GAN via a quality diversity algorithm to generate a diverse arrangement collection.
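The latent-space search described above can be sketched with a MAP-Elites-style quality diversity loop. Everything in the sketch below is a stand-in for illustration: the random-projection `generate` function substitutes for the paper's trained GAN decoder, and the behavior measures and fitness are made up, not the paper's actual diversity measures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed random projection from an 8-dim latent
# space to a 16-dim "arrangement" vector (hypothetical; the paper uses a
# trained GAN decoder instead).
W = rng.normal(size=(8, 16))

def generate(z):
    return np.tanh(z @ W)

def behavior(layout):
    # Two hypothetical diversity measures, binned into a 5x5 archive grid.
    m1, m2 = layout[:8].mean(), layout[8:].mean()
    return (int(np.clip((m1 + 1) / 2 * 5, 0, 4)),
            int(np.clip((m2 + 1) / 2 * 5, 0, 4)))

def quality(layout):
    # Toy fitness standing in for a user-defined arrangement score.
    return -np.abs(layout).sum()

def map_elites(iterations=2000):
    archive = {}  # behavior cell -> (fitness, latent vector)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate the latent vector of a random elite.
            parent = archive[list(archive)[rng.integers(len(archive))]][1]
            z = parent + 0.2 * rng.normal(size=8)
        else:
            z = rng.normal(size=8)  # fresh random latent
        layout = generate(z)
        cell, fit = behavior(layout), quality(layout)
        # Keep only the best latent found so far in each behavior cell.
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, z)
    return archive
```

Because each archive cell holds the highest-fitness latent vector with that behavior, the final archive is a diverse collection of generator inputs rather than a single optimum.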

Generative Adversarial Network

Learning Controllable 3D Level Generators

1 code implementation · 27 Jun 2022 · Zehua Jiang, Sam Earle, Michael Cerny Green, Julian Togelius

Procedural Content Generation via Reinforcement Learning (PCGRL) foregoes the need for large human-authored datasets and allows agents to train explicitly on functional constraints, using computable, user-defined measures of quality instead of target output.
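A computable quality measure of the kind mentioned above can be as simple as a shortest-path check. Below is a minimal sketch of a PCGRL-style reward that nudges a level's path length toward a target; the tile encoding (0 = floor, 1 = wall), the `target` value, and the reward shape are all illustrative assumptions, not the paper's actual reward.

```python
from collections import deque

def path_length(level, start, goal):
    """Shortest-path length through floor tiles (0 = floor, 1 = wall)."""
    rows, cols = len(level), len(level[0])
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and level[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return -1  # goal unreachable

def reward(old_level, new_level, start, goal, target=10):
    """Toy PCGRL-style reward: move the level's path length toward a target.

    Positive when the agent's edit brings the (computable) quality measure
    closer to the user-defined target, negative when it moves away.
    """
    old_gap = abs(target - path_length(old_level, start, goal))
    new_gap = abs(target - path_length(new_level, start, goal))
    return old_gap - new_gap
```

Because the reward is computed from the level itself, no human-authored example levels are needed; the agent trains directly against the functional constraint.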

Pathfinding Neural Cellular Automata

no code implementations · 17 Jan 2023 · Sam Earle, Ozlem Yildiz, Julian Togelius, Chinmay Hegde

As a step toward developing such networks, we hand-code and learn models for Breadth-First Search (BFS), i.e. shortest-path finding, using the unified architectural framework of Neural Cellular Automata, which are iterative neural networks with equal-size inputs and outputs.

Level Generation Through Large Language Models

no code implementations · 11 Feb 2023 · Graham Todd, Sam Earle, Muhammad Umair Nasir, Michael Cerny Green, Julian Togelius

Large Language Models (LLMs) are powerful tools, capable of leveraging their training on natural language to write stories, generate code, and answer questions.

Controllable Path of Destruction

no code implementations · 29 May 2023 · Matthew Siper, Sam Earle, Zehua Jiang, Ahmed Khalifa, Julian Togelius

The PoD method is very data-efficient in terms of original training examples and well-suited to functional artifacts composed of categorical data, such as game levels and discrete 3D structures.

Amorphous Fortress: Observing Emergent Behavior in Multi-Agent FSMs

no code implementations · 22 Jun 2023 · M Charity, Dipika Rajesh, Sam Earle, Julian Togelius

We introduce a system called Amorphous Fortress -- an abstract, yet spatial, open-ended artificial life simulation.

Artificial Life

Evolutionary Machine Learning and Games

no code implementations · 20 Nov 2023 · Julian Togelius, Ahmed Khalifa, Sam Earle, Michael Cerny Green, Lisa Soros

Evolutionary machine learning (EML) has been applied to games in multiple ways, and for multiple different purposes.

Large Language Models and Games: A Survey and Roadmap

no code implementations · 28 Feb 2024 · Roberto Gallotta, Graham Todd, Marvin Zammit, Sam Earle, Antonios Liapis, Julian Togelius, Georgios N. Yannakakis

Recent years have seen an explosive increase in research on large language models (LLMs), and accompanying public engagement on the topic.
