Search Results for author: Thomas Miconi

Found 15 papers, 9 papers with code

Neural networks with differentiable structure

1 code implementation • 20 Jun 2016 • Thomas Miconi

We test this method on recurrent neural networks applied to simple sequence prediction problems.

Learning to learn with backpropagation of Hebbian plasticity

1 code implementation • 8 Sep 2016 • Thomas Miconi

As a result, the networks "learn how to learn" in order to solve the problem at hand: the trained networks automatically perform fast learning of unpredictable environmental features during their lifetime, expanding the range of solvable problems.

Continual Learning • One-Shot Learning

The impossibility of "fairness": a generalized impossibility result for decisions

no code implementations • 5 Jul 2017 • Thomas Miconi

Here we show that, when groups differ in prevalence of the predicted event, several intuitive, reasonable measures of fairness (probability of positive prediction given occurrence or non-occurrence; probability of occurrence given prediction or non-prediction; and ratio of predictions over occurrences for each group) are all mutually exclusive: if one of them is equal among groups, the other two must differ.

Fairness • Specificity
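The abstract above claims that when prevalence differs between groups, equalizing any one fairness measure forces the other two to differ. A small numeric illustration of that tension (the group prevalences and classifier rates below are hypothetical, not figures from the paper):

```python
# Hypothetical illustration: two groups receive a classifier with identical
# per-class prediction rates (TPR = 0.8, FPR = 0.2), but the groups differ
# in prevalence of the predicted event.
def ppv(prevalence, tpr=0.8, fpr=0.2):
    """P(occurrence | positive prediction), via Bayes' rule."""
    positives = tpr * prevalence + fpr * (1 - prevalence)
    return tpr * prevalence / positives

def prediction_ratio(prevalence, tpr=0.8, fpr=0.2):
    """Ratio of positive predictions to actual occurrences."""
    positives = tpr * prevalence + fpr * (1 - prevalence)
    return positives / prevalence

# Group A: prevalence 0.5; Group B: prevalence 0.2.
# The per-class rates are equal by construction, yet:
print(ppv(0.5), ppv(0.2))                            # 0.8 vs 0.5
print(prediction_ratio(0.5), prediction_ratio(0.2))  # 1.0 vs 1.6
```

Equalizing the first measure (prediction rates given occurrence/non-occurrence) makes both the precision-style measure and the predictions-over-occurrences ratio diverge between groups, as the paper's general result predicts.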

Differentiable plasticity: training plastic neural networks with backpropagation

5 code implementations • ICML 2018 • Thomas Miconi, Jeff Clune, Kenneth O. Stanley

How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?

Meta-Learning
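The differentiable-plasticity idea can be sketched roughly as follows: each connection combines a backprop-trained fixed weight with a backprop-trained plasticity coefficient scaling a Hebbian trace that changes during the network's "lifetime". The shapes, the decay rule, and the rate eta below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Minimal forward-pass sketch of a plastic layer: output uses (w + alpha * hebb),
# then the Hebbian trace decays toward the outer product of input and output.
def plastic_forward(x, w, alpha, hebb, eta=0.1):
    y = np.tanh(x @ (w + alpha * hebb))
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)  # within-lifetime update
    return y, hebb

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
w = rng.normal(scale=0.5, size=(n_in, n_out))      # trained by backprop
alpha = rng.normal(scale=0.1, size=(n_in, n_out))  # trained by backprop
hebb = np.zeros((n_in, n_out))                     # reset at episode start

for _ in range(5):                                 # fast within-episode learning
    y, hebb = plastic_forward(rng.normal(size=n_in), w, alpha, hebb)
print(y.shape, hebb.shape)
```

In the actual method, gradients flow through the Hebbian updates so that `w` and `alpha` are optimized end-to-end; only the inference-time mechanics are sketched here.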

Differentiable Hebbian Consolidation for Continual Learning

no code implementations • 25 Sep 2019 • Vithursan Thangarasa, Thomas Miconi, Graham W. Taylor

Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.

Continual Learning • Permuted-MNIST +1

First-Order Preconditioning via Hypergradient Descent

1 code implementation • 18 Oct 2019 • Ted Moskovitz, Rui Wang, Janice Lan, Sanyam Kapoor, Thomas Miconi, Jason Yosinski, Aditya Rawal

Standard gradient descent methods are susceptible to a range of issues that can impede training, such as high correlations and different scaling in parameter space. These difficulties can be addressed by second-order approaches that apply a pre-conditioning matrix to the gradient to improve convergence.
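The preconditioning idea in the snippet can be sketched with a diagonal preconditioner that is itself adapted by a first-order hypergradient step (the step sizes, the sign-agreement update rule, and the ill-conditioned quadratic below are illustrative assumptions, not the paper's method or experiments):

```python
import numpy as np

# Sketch: precondition the gradient as p * g, and adapt p by a hypergradient
# step -- grow p[i] when successive gradients agree in sign, shrink otherwise.
def optimize(A, w0, lr=0.1, hyper_lr=0.5, steps=200):
    w = w0.copy()
    p = np.ones_like(w)          # diagonal preconditioner, starts at identity
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = A @ w                # gradient of f(w) = 0.5 * w @ A @ w
        p += hyper_lr * g * prev_g   # first-order hypergradient update
        w -= lr * p * g              # preconditioned gradient step
        prev_g = g
    return w, p

A = np.diag([10.0, 0.1])         # badly conditioned quadratic
w, p = optimize(A, np.array([1.0, 1.0]))
print(0.5 * w @ A @ w)           # loss after training
```

The preconditioner grows along the small-curvature direction, compensating for the scaling mismatch that would slow plain gradient descent.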

Learning to Continually Learn

5 code implementations • 21 Feb 2020 • Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Nick Cheney

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.

Continual Learning • Meta-Learning

Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

1 code implementation • 21 Feb 2020 • Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Imitation Learning • Transfer Learning
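The $Q(s, s')$ value function described above admits a Bellman-style backup over state transitions rather than state-action pairs: $Q(s, s') = r(s, s') + \gamma \max_{s''} Q(s', s'')$. A tabular sketch on a toy problem (the 5-state chain, rewards, and discount are illustrative assumptions, not the paper's environments):

```python
# Tabular value iteration on Q(s, s'): the value of moving from state s to a
# neighboring state s', then acting optimally thereafter.
n = 5
gamma = 0.9
neighbors = {s: [max(s - 1, 0), min(s + 1, n - 1)] for s in range(n)}
reward = lambda s, s2: 1.0 if s2 == n - 1 else 0.0   # goal at the right end

Q = {(s, s2): 0.0 for s in range(n) for s2 in neighbors[s]}
for _ in range(100):  # Bellman backup over transitions, not state-action pairs
    Q = {(s, s2): reward(s, s2) + gamma * max(Q[(s2, s3)] for s3 in neighbors[s2])
         for (s, s2) in Q}

# Greedy policy: from each state, move to the neighbor with the highest Q(s, s').
policy = {s: max(neighbors[s], key=lambda s2: Q[(s, s2)]) for s in range(n)}
print(policy)  # every state moves right, toward the goal
```

Acting greedily over neighboring states recovers the optimal policy without ever enumerating an action space, which is the appeal of this formulation in deterministic dynamics.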

Enabling Continual Learning with Differentiable Hebbian Plasticity

no code implementations • 30 Jun 2020 • Vithursan Thangarasa, Thomas Miconi, Graham W. Taylor

Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.

Continual Learning • Permuted-MNIST +1

Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning

1 code implementation • 16 Dec 2021 • Thomas Miconi

A hallmark of intelligence is the ability to autonomously learn new flexible, cognitive behaviors - that is, behaviors where the appropriate action depends not just on immediate stimuli (as in simple reflexive stimulus-response associations), but on contextual information that must be adequately acquired, stored and processed for each new instance of the task.

Meta-Learning

Procedural generation of meta-reinforcement learning tasks

1 code implementation • 11 Feb 2023 • Thomas Miconi

The parametrization allows us to randomly generate an arbitrary number of novel simple meta-learning tasks.

Meta-Learning • Meta Reinforcement Learning +2

Brain-inspired learning in artificial neural networks: a review

no code implementations • 18 May 2023 • Samuel Schmidgall, Jascha Achterberg, Thomas Miconi, Louis Kirsch, Rojin Ziaei, S. Pardis Hajiseyedrazi, Jason Eshraghian

Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics.

Estimating Q(s,s') with Deterministic Dynamics Gradients

no code implementations • ICML 2020 • Ashley Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Transfer Learning
