1 code implementation • 23 Dec 2024 • Akarsh Kumar, Chris Lu, Louis Kirsch, Yujin Tang, Kenneth O. Stanley, Phillip Isola, David Ha
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields.
no code implementations • 17 Jun 2022 • Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, Kenneth O. Stanley
This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP).
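A minimal sketch of this idea, assuming a hypothetical `llm_complete` call in place of whatever code-generation model is used; the selection scheme and fitness function below are illustrative placeholders rather than the paper's actual setup:

```python
# Minimal sketch of an LLM-backed mutation operator inside a generic GP loop.
# `llm_complete` is a hypothetical stand-in for any code-generation model API;
# the selection scheme and fitness function are placeholders, not the paper's setup.
import random

def llm_complete(prompt: str) -> str:
    """Hypothetical call to a code-generation LLM; returns a program as text."""
    raise NotImplementedError("wire up your own model here")

def llm_mutate(program: str) -> str:
    # Ask the model for a small semantic edit rather than flipping tokens at random.
    prompt = (
        "Below is a Python program. Produce a modified version that makes one "
        "small, plausible improvement. Return only code.\n\n" + program
    )
    return llm_complete(prompt)

def evolve(population, fitness, generations=50, elite_frac=0.2):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elites = scored[: max(1, int(elite_frac * len(scored)))]
        # Refill the population by mutating randomly chosen elites with the LLM.
        population = elites + [llm_mutate(random.choice(elites))
                               for _ in range(len(population) - len(elites))]
    return max(population, key=fitness)
```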
no code implementations • 22 Jun 2021 • Navid Kardan, Ankit Sharma, Kenneth O. Stanley
Furthermore, we present a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles, where overconfident predictions are rectified by transformed versions of the original classification task.
1 code implementation • 27 May 2020 • Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, Kenneth O. Stanley
Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluating the network with thousands of domain-specific data samples.
2 code implementations • 27 Apr 2020 • Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune
The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only.
Ranked #1 on Atari 2600 Montezuma's Revenge (Atari Games)
1 code implementation • 25 Mar 2020 • Jiale Zhi, Rui Wang, Jeff Clune, Kenneth O. Stanley
Recent advances in machine learning are consistently enabled by increasing amounts of computation.
1 code implementation • ICML 2020 • Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning.
no code implementations • ICLR 2019 • Thomas Miconi, Aditya Rawal, Jeff Clune, Kenneth O. Stanley
We show that neuromodulated plasticity improves the performance of neural networks on both reinforcement learning and supervised learning tasks.
5 code implementations • 21 Feb 2020 • Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Nick Cheney
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.
1 code implementation • 29 Dec 2019 • Sebastian Risi, Kenneth O. Stanley
Deep reinforcement learning approaches have shown impressive results in a variety of domains; however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately instead of end-to-end.
3 code implementations • 17 Dec 2019 • Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune
This paper introduces Generative Teaching Networks (GTNs), discusses their potential, and shows that they can substantially accelerate learning.
no code implementations • 25 Sep 2019 • Sebastian Risi, Kenneth O. Stanley
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels.
no code implementations • 10 Sep 2019 • Norman Packard, Mark A. Bedau, Alastair Channon, Takashi Ikegami, Steen Rasmussen, Kenneth O. Stanley, Tim Taylor
Nature's spectacular inventiveness, reflected in the enormous diversity of form and function displayed by the biosphere, is a feature of life that distinguishes living most strongly from nonliving.
1 code implementation • 13 Jul 2019 • Alexander Gajewski, Jeff Clune, Kenneth O. Stanley, Joel Lehman
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances.
4 code implementations • 28 Apr 2019 • Sebastian Risi, Kenneth O. Stanley
Instead of the relatively simple architectures employed in most RL experiments, world models rely on multiple different neural components that are responsible for visual information processing, memory, and decision-making.
3 code implementations • 30 Jan 2019 • Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge.
Ranked #1 on Atari 2600 Pitfall! (Atari Games)
2 code implementations • 7 Jan 2019 • Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges.
1 code implementation • 3 May 2018 • Rui Wang, Jeff Clune, Kenneth O. Stanley
Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems.
5 code implementations • ICML 2018 • Thomas Miconi, Jeff Clune, Kenneth O. Stanley
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?
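This entry is the differentiable-plasticity work; a minimal, forward-only sketch of a Hebbian plastic-weight rule of that kind is below, with toy sizes and untrained random parameters standing in for values the paper would learn end-to-end by backpropagation:

```python
# Sketch of a plastic-weight rule: each connection has a fixed component w and a
# plastic component alpha * Hebb, where Hebb is a Hebbian trace updated during the
# agent's lifetime. In the paper, w and alpha (and the plasticity rate) are trained
# with backprop; this forward-only NumPy version just shows the recurrence.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
w = rng.normal(scale=0.1, size=(n_in, n_out))      # fixed (trained) weights
alpha = rng.normal(scale=0.1, size=(n_in, n_out))  # plasticity coefficients (trained)
eta = 0.05                                         # plasticity learning rate
hebb = np.zeros((n_in, n_out))                     # Hebbian trace, reset each episode

def step(x, hebb):
    # Effective weight = fixed part + plastic part.
    y = np.tanh(x @ (w + alpha * hebb))
    # Hebbian trace: decaying running average of input-output coactivation.
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

for t in range(10):
    x = rng.normal(size=n_in)
    y, hebb = step(x, hebb)
```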
no code implementations • 9 Mar 2018 • Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, Jason Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them.
14 code implementations • 18 Dec 2017 • Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Here we demonstrate that gradient-free methods can compete: we evolve the weights of a DNN with a simple, population-based genetic algorithm (GA), and it performs well on hard deep RL problems, including Atari and humanoid locomotion.
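A rough sketch of such a gradient-free GA over a flat parameter vector is below; truncation selection, Gaussian mutation, and the placeholder fitness function are assumptions for illustration rather than the paper's exact configuration:

```python
# Minimal sketch of a population-based genetic algorithm over network weights:
# truncation selection plus additive Gaussian mutation on a flat parameter vector,
# with the best individual carried over unchanged. The fitness function, parameter
# count, and hyperparameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def fitness(theta):                 # placeholder: e.g. episode return of a policy
    return -np.sum(theta ** 2)      # parameterized by theta

def deep_ga(n_params, pop_size=200, n_parents=20, sigma=0.02, generations=100):
    pop = [rng.normal(scale=0.1, size=n_params) for _ in range(pop_size)]
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[:n_parents]]
        elite = parents[0]
        # Children are mutated copies of randomly chosen top parents (no crossover).
        pop = [elite] + [parents[rng.integers(n_parents)] + sigma * rng.normal(size=n_params)
                         for _ in range(pop_size - 1)]
    return elite

best = deep_ga(n_params=1000)
```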
no code implementations • 18 Dec 2017 • Xingwen Zhang, Jeff Clune, Kenneth O. Stanley
Because stochastic gradient descent (SGD) has shown promise optimizing neural networks with millions of parameters and few if any alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL).
1 code implementation • 18 Dec 2017 • Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley
While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks.
no code implementations • 18 Dec 2017 • Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley
However, this ES optimizes a different objective than the reward of a single parameter vector: it optimizes the average reward of the entire population, thereby seeking parameters that are robust to perturbation.
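Concretely, the population-average (smoothed) objective and its score-function gradient estimate can be sketched as follows; the reward function and hyperparameters here are placeholders:

```python
# Sketch of the population-average objective this ES optimizes: rather than the
# reward F(theta) at a single point, it maximizes E[F(theta + sigma * eps)] over
# Gaussian perturbations eps, which favors parameters whose whole neighborhood
# scores well (robustness to perturbation). F and the sizes are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def F(theta):                        # placeholder reward, e.g. an episode return
    return -np.sum((theta - 1.0) ** 2)

def es_step(theta, sigma=0.1, lr=0.02, n_samples=100):
    eps = rng.normal(size=(n_samples, theta.size))
    returns = np.array([F(theta + sigma * e) for e in eps])
    # Score-function estimate of the gradient of the smoothed objective.
    grad = (returns[:, None] * eps).mean(axis=0) / sigma
    return theta + lr * grad

theta = np.zeros(5)
for _ in range(200):
    theta = es_step(theta)
```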
2 code implementations • NeurIPS 2018 • Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O. Stanley, Jeff Clune
Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g., hours vs. days) because they parallelize better.
1 code implementation • 17 Apr 2017 • Joost Huizinga, Kenneth O. Stanley, Jeff Clune
In this paper we reveal a unique system in which canalization did emerge in computational evolution.
no code implementations • 30 Mar 2017 • Andrea Soltoggio, Kenneth O. Stanley, Sebastian Risi
Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment.
1 code implementation • 7 Sep 2016 • Navid Kardan, Kenneth O. Stanley
Though deep learning has pushed the boundaries of classification forward, in recent years hints of the limits of standard classification have begun to emerge.
no code implementations • 11 Jul 2014 • Paul Szerlip, Kenneth O. Stanley
To address the difficulty of creating online collaborative evolutionary systems, this paper presents a new prototype library called Worldwide Infrastructure for Neuroevolution (WIN) and its accompanying site WIN Online (http://winark.org/).
no code implementations • 6 Jun 2014 • Paul A. Szerlip, Gregory Morse, Justin K. Pugh, Kenneth O. Stanley
Unlike unsupervised approaches such as autoencoders, which learn to reconstruct their inputs, the approach introduced in this paper, divergent discriminative feature accumulation (DDFA), instead continually accumulates features that make novel discriminations among the training set.
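A rough sketch of the accumulate-if-novel idea is below; random linear features and the novelty threshold stand in for the evolved features and settings actually used in the paper:

```python
# Sketch of accumulate-if-novel feature collection: a candidate feature is kept
# only if the discriminations it makes on the training set (here, its binary
# response vector) are sufficiently far from those of every feature already kept.
# Random linear features and the threshold are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))          # placeholder training inputs

def candidate_feature():
    w = rng.normal(size=X.shape[1])
    return (X @ w > 0).astype(float)    # behavior: binary responses over the training set

def accumulate(n_candidates=2000, threshold=0.25):
    archive = []
    for _ in range(n_candidates):
        b = candidate_feature()
        # Novelty = fraction of disagreeing samples vs. the nearest kept feature.
        if not archive or min(np.mean(b != a) for a in archive) > threshold:
            archive.append(b)
    return archive

features = accumulate()
```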
3 code implementations • Evolutionary Computation 2002 • Kenneth O. Stanley, Risto Miikkulainen
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights.
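A minimal sketch of a NEAT-style genome shows one way topology and weights can evolve together: connection genes carry global innovation numbers (historical markings) that let crossover align genes from different topologies and let speciation measure compatibility. Mutation operators beyond add-node, and speciation itself, are omitted, and the representation here is a simplification:

```python
# Simplified NEAT-style genome: node genes plus connection genes, where each
# connection carries a global innovation number. The add-node mutation splits an
# existing connection, disabling it and inserting a new hidden node in its place.
from dataclasses import dataclass, field
from itertools import count

_innovation = count()   # global innovation counter shared across the population

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = field(default_factory=lambda: next(_innovation))

@dataclass
class Genome:
    nodes: list            # node ids (inputs, outputs, evolved hidden nodes)
    connections: list      # list of ConnectionGene

    def add_node(self, conn: ConnectionGene, new_id: int):
        # Structural mutation: split an existing connection by inserting a node.
        conn.enabled = False
        self.nodes.append(new_id)
        self.connections.append(ConnectionGene(conn.in_node, new_id, 1.0))
        self.connections.append(ConnectionGene(new_id, conn.out_node, conn.weight))
```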