Search Results for author: Kenneth O. Stanley

Found 30 papers, 19 papers with code

Evolution through Large Models

no code implementations · 17 Jun 2022 · Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, Kenneth O. Stanley

This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP).

Language Modelling

Towards Consistent Predictive Confidence through Fitted Ensembles

no code implementations · 22 Jun 2021 · Navid Kardan, Ankit Sharma, Kenneth O. Stanley

Furthermore, we present a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles, where overconfident predictions are rectified by transformed versions of the original classification task.

Out of Distribution (OOD) Detection

Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search

1 code implementation · 27 May 2020 · Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, Kenneth O. Stanley

Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands of domain-specific data samples.

Neural Architecture Search

First return, then explore

2 code implementations · 27 Apr 2020 · Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune

The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only.

Montezuma's Revenge · reinforcement-learning

Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

1 code implementation · ICML 2020 · Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley

Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning.

Reinforcement Learning (RL)

Learning to Continually Learn

5 code implementations · 21 Feb 2020 · Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Nick Cheney

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.

Continual Learning · Meta-Learning

Deep Innovation Protection: Confronting the Credit Assignment Problem in Training Heterogeneous Neural Architectures

1 code implementation · 29 Dec 2019 · Sebastian Risi, Kenneth O. Stanley

Deep reinforcement learning approaches have shown impressive results in a variety of different domains; however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately instead of end-to-end.

Multiobjective Optimization

Deep Innovation Protection

no code implementations · 25 Sep 2019 · Sebastian Risi, Kenneth O. Stanley

Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.

Multiobjective Optimization

An Overview of Open-Ended Evolution: Editorial Introduction to the Open-Ended Evolution II Special Issue

no code implementations · 10 Sep 2019 · Norman Packard, Mark A. Bedau, Alastair Channon, Takashi Ikegami, Steen Rasmussen, Kenneth O. Stanley, Tim Taylor

Nature's spectacular inventiveness, reflected in the enormous diversity of form and function displayed by the biosphere, is a feature of life that distinguishes the living most strongly from the nonliving.

Artificial Life

Evolvability ES: Scalable and Direct Optimization of Evolvability

1 code implementation · 13 Jul 2019 · Alexander Gajewski, Jeff Clune, Kenneth O. Stanley, Joel Lehman

Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances.

Evolutionary Algorithms · Meta-Learning

Deep Neuroevolution of Recurrent and Discrete World Models

4 code implementations · 28 Apr 2019 · Sebastian Risi, Kenneth O. Stanley

Instead of the relatively simple architectures employed in most RL experiments, world models rely on multiple different neural components that are responsible for visual information processing, memory, and decision-making.

Car Racing · Decision Making

Go-Explore: a New Approach for Hard-Exploration Problems

3 code implementations · 30 Jan 2019 · Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune

Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge.

Imitation Learning · Montezuma's Revenge
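
The "first return, then explore" loop behind Go-Explore can be sketched on a toy deterministic environment. Everything here (the 1-D line world, the cell representation, and the uniform cell-selection rule) is an invented stand-in for the paper's Atari setup; only the archive / return / explore structure mirrors the idea described above.

```python
import random

random.seed(0)

# Toy deterministic environment: a 1-D line starting at 0.
def step(state, action):          # action in {-1, +1}
    return max(0, state + action)

# Archive maps a "cell" (here, the state itself) to the trajectory that
# first reached it, so the agent can deterministically return to it later.
archive = {0: []}

for _ in range(200):
    # Select a cell to revisit (uniformly at random, for simplicity).
    cell = random.choice(list(archive))
    trajectory = list(archive[cell])
    # First return: replay the stored trajectory in the deterministic env.
    state = 0
    for a in trajectory:
        state = step(state, a)
    # Then explore: take a few random actions from the restored state,
    # archiving any newly discovered cell together with its trajectory.
    for _ in range(5):
        a = random.choice([-1, 1])
        state = step(state, a)
        trajectory.append(a)
        if state not in archive:
            archive[state] = list(trajectory)

best = max(archive)  # furthest cell ever reached
```

Because returning is done by exact replay, exploration effort is never wasted re-solving already-solved prefixes, which is the core of the method's sample efficiency on hard-exploration games.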

Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions

2 code implementations · 7 Jan 2019 · Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley

Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges.

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

1 code implementation · 3 May 2018 · Rui Wang, Jeff Clune, Kenneth O. Stanley

Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems.

Data Visualization · Evolutionary Algorithms

Differentiable plasticity: training plastic neural networks with backpropagation

5 code implementations · ICML 2018 · Thomas Miconi, Jeff Clune, Kenneth O. Stanley

How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?

Meta-Learning
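
One answer explored in this paper is to give each connection a fixed weight plus a Hebbian trace scaled by a learned plasticity coefficient, so that the network keeps adapting after training. The sketch below is a minimal pure-Python rendering of that plastic-layer rule; the random values standing in for w and alpha are placeholders (in the paper both are trained by backpropagation).

```python
import math
import random

random.seed(1)
n_in, n_out, eta = 3, 2, 0.1

# Slow, trained parameters: fixed weights w and plasticity coefficients
# alpha (random placeholders here, gradient-trained in the paper).
w     = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
alpha = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
hebb  = [[0.0] * n_in for _ in range(n_out)]  # fast, lifetime-learned trace

def forward(x):
    # Effective weight of each connection = fixed part + alpha * Hebbian trace.
    y = [math.tanh(sum((w[i][j] + alpha[i][j] * hebb[i][j]) * x[j]
                       for j in range(n_in)))
         for i in range(n_out)]
    # Hebbian update: the trace decays toward the outer product y x^T.
    for i in range(n_out):
        for j in range(n_in):
            hebb[i][j] = (1 - eta) * hebb[i][j] + eta * y[i] * x[j]
    return y

x = [1.0, 0.5, -0.5]
y1 = forward(x)
y2 = forward(x)  # same input, but the trace has changed the effective weights
```

Presenting the same input twice yields different outputs, which is exactly the within-lifetime adaptation the question above is after.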

Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents

2 code implementations · NeurIPS 2018 · Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O. Stanley, Jeff Clune

Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g., hours vs. days) because they parallelize better.

Policy Gradient Methods · Q-Learning
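
The black-box ES update described above can be illustrated on a toy one-dimensional problem. This is a simplified sketch with an invented objective and a mean-reward baseline for variance reduction, not the authors' implementation:

```python
import random

random.seed(0)

# Toy stand-in for an RL return: maximize f(theta) = -(theta - 3)^2.
# ES only ever queries f -- no gradients flow through it (black-box).
def reward(theta):
    return -(theta - 3.0) ** 2

theta, sigma, lr, pop = 0.0, 0.1, 0.03, 50

for _ in range(300):
    # Sample a population of Gaussian parameter perturbations. Each
    # perturbed candidate can be evaluated on a separate worker, which
    # is why ES parallelizes so well in practice.
    eps = [random.gauss(0.0, 1.0) for _ in range(pop)]
    rewards = [reward(theta + sigma * e) for e in eps]
    baseline = sum(rewards) / pop  # variance-reduction baseline
    # Estimated gradient of the expected reward under perturbation.
    grad = sum((r - baseline) * e
               for r, e in zip(rewards, eps)) / (pop * sigma)
    theta += lr * grad
```

After a few hundred updates theta sits near the optimum at 3.0; at scale, only scalar rewards need to be communicated between workers, not parameter gradients.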

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

14 code implementations · 18 Dec 2017 · Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune

Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion.

Evolutionary Algorithms · Q-Learning
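
The gradient-free, population-based GA described above can be illustrated at toy scale. The fitness function and parameter vector below are invented stand-ins for a DNN policy evaluated in an environment; only the selection / elitism / Gaussian-mutation structure mirrors the paper's approach:

```python
import random

random.seed(0)

# Toy stand-in for "DNN weights": a parameter vector scored by a fitness
# function (in the paper, fitness is the episode return of the policy).
target = [1.0, -2.0, 0.5]
def fitness(theta):
    return -sum((t, g) and (t - g) ** 2 for t, g in zip(theta, target))

pop_size, elite, sigma = 20, 5, 0.1
population = [[0.0, 0.0, 0.0] for _ in range(pop_size)]

for _ in range(100):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:elite]  # truncation selection
    # Next generation: keep the best individual unchanged (elitism) and
    # fill the rest with Gaussian-mutated copies of random parents.
    # No crossover, no gradients -- selection alone drives the search.
    population = [parents[0]] + [
        [wi + random.gauss(0.0, sigma) for wi in random.choice(parents)]
        for _ in range(pop_size - 1)
    ]

best = max(population, key=fitness)
```

Like ES, every fitness evaluation is independent, so the generation loop parallelizes trivially across workers.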

On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent

no code implementations · 18 Dec 2017 · Xingwen Zhang, Jeff Clune, Kenneth O. Stanley

Because stochastic gradient descent (SGD) has shown promise in optimizing neural networks with millions of parameters, and few, if any, alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL).

Reinforcement Learning (RL)

Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

1 code implementation · 18 Dec 2017 · Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley

While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks.

Artificial Life

ES Is More Than Just a Traditional Finite-Difference Approximator

no code implementations · 18 Dec 2017 · Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley

However, this ES optimizes for a different gradient than just reward: It optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation.

Reinforcement Learning (RL)

The Emergence of Canalization and Evolvability in an Open-Ended, Interactive Evolutionary System

1 code implementation · 17 Apr 2017 · Joost Huizinga, Kenneth O. Stanley, Jeff Clune

In this paper we reveal a unique system in which canalization did emerge in computational evolution.

Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks

no code implementations · 30 Mar 2017 · Andrea Soltoggio, Kenneth O. Stanley, Sebastian Risi

Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment.

Fitted Learning: Models with Awareness of their Limits

1 code implementation · 7 Sep 2016 · Navid Kardan, Kenneth O. Stanley

Though deep learning has pushed the boundaries of classification forward, in recent years hints of the limits of standard classification have begun to emerge.

Classification · General Classification

A Proposed Infrastructure for Adding Online Interaction to Any Evolutionary Domain

no code implementations · 11 Jul 2014 · Paul Szerlip, Kenneth O. Stanley

To address the difficulty of creating online collaborative evolutionary systems, this paper presents a new prototype library called Worldwide Infrastructure for Neuroevolution (WIN) and its accompanying site WIN Online (http://winark.org/).

Unsupervised Feature Learning through Divergent Discriminative Feature Accumulation

no code implementations · 6 Jun 2014 · Paul A. Szerlip, Gregory Morse, Justin K. Pugh, Kenneth O. Stanley

Unlike unsupervised approaches such as autoencoders that learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning called divergent discriminative feature accumulation (DDFA) that instead continually accumulates features that make novel discriminations among the training set.

General Classification
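
The "continually accumulate features that make novel discriminations" idea can be illustrated with a deliberately tiny stand-in. The random linear-threshold features, 2-D data, and Hamming-distance novelty threshold below are all invented for illustration; the paper itself evolves NEAT networks under novelty search rather than sampling random features:

```python
import random

random.seed(0)

# Tiny training set of 2-D points; a candidate "feature" is a random linear
# threshold unit, and its behavior is its 0/1 output over the training set.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]

def behavior(feat):
    a, b, c = feat
    return tuple(1 if a * x + b * y + c > 0 else 0 for x, y in data)

def novelty(beh, accumulated):
    # Hamming distance to the nearest already-accumulated behavior.
    if not accumulated:
        return float("inf")
    return min(sum(u != v for u, v in zip(beh, other)) for other in accumulated)

features, behaviors = [], []
for _ in range(500):
    cand = tuple(random.uniform(-1, 1) for _ in range(3))
    beh = behavior(cand)
    # Accumulate only features that discriminate the training set in a way
    # sufficiently different from everything collected so far.
    if novelty(beh, behaviors) >= 3:
        features.append(cand)
        behaviors.append(beh)
```

Because acceptance is driven by behavioral distance rather than reconstruction error, the accumulated set grows toward a diverse bank of discriminators instead of redundant copies of the easiest split.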

Evolving Neural Networks through Augmenting Topologies

3 code implementations · Evolutionary Computation 2002 · Kenneth O. Stanley, Risto Miikkulainen

An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights.

Neural Architecture Search
