Search Results for author: Thomas Miconi

Found 13 papers, 6 papers with code

Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

no code implementations • ICML 2020 • Ashley Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Transfer Learning
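The $Q(s, s')$ idea above can be illustrated with a tabular sketch. The toy chain environment, reward, and constants below are invented for illustration; only the recursion $Q(s, s') = r(s, s') + \gamma \max_{s''} Q(s', s'')$ reflects the value function described in the abstract.

```python
# Hedged sketch: tabular Q(s, s') iteration on a toy deterministic chain MDP.
# The environment and constants are invented; only the Q(s, s') recursion
# follows the paper's definition of a value over state transitions.
GAMMA = 0.9
N_STATES = 5  # chain 0-1-2-3-4; arriving at state 4 pays reward 1

def neighbors(s):
    """Deterministic successor states (move left or right, clipped)."""
    return {max(s - 1, 0), min(s + 1, N_STATES - 1)}

def reward(s, s2):
    return 1.0 if s2 == N_STATES - 1 else 0.0

# Value iteration over state *pairs* instead of state-action pairs.
Q = {(s, s2): 0.0 for s in range(N_STATES) for s2 in neighbors(s)}
for _ in range(100):
    Q = {(s, s2): reward(s, s2) + GAMMA * max(Q[(s2, s3)] for s3 in neighbors(s2))
         for (s, s2) in Q}

# Acting greedily means heading toward the highest-valued neighboring state.
path, s = [0], 0
while s != N_STATES - 1:
    s = max(neighbors(s), key=lambda s2: Q[(s, s2)])
    path.append(s)
print(path)  # shortest route along the chain: [0, 1, 2, 3, 4]
```

Because values are attached to transitions rather than actions, the greedy step picks a desired next state; in the paper's setting a learned model then supplies the action that realizes it.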

Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning

no code implementations • 16 Dec 2021 • Thomas Miconi

A hallmark of intelligence is the ability to learn new flexible, cognitive behaviors, that is, behaviors that require memorizing and exploiting a specific item of information for each new instance of the task.

Meta-Learning

Enabling Continual Learning with Differentiable Hebbian Plasticity

no code implementations • 30 Jun 2020 • Vithursan Thangarasa, Thomas Miconi, Graham W. Taylor

Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.

Continual Learning · Permuted-MNIST · +1

Learning to Continually Learn

5 code implementations • 21 Feb 2020 • Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Nick Cheney

Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it.

Continual Learning · Meta-Learning

Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

1 code implementation • 21 Feb 2020 • Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter.

Imitation Learning · Transfer Learning

First-Order Preconditioning via Hypergradient Descent

1 code implementation • 18 Oct 2019 • Ted Moskovitz, Rui Wang, Janice Lan, Sanyam Kapoor, Thomas Miconi, Jason Yosinski, Aditya Rawal

Standard gradient descent methods are susceptible to a range of issues that can impede training, such as high correlations and widely different scalings across directions of parameter space. These difficulties can be addressed by second-order approaches that apply a preconditioning matrix to the gradient to improve convergence.
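The general idea of adapting a preconditioner by hypergradient descent can be sketched on a badly scaled quadratic. This is not the paper's exact algorithm: the diagonal preconditioner, step sizes, and test function below are invented for illustration; only the hypergradient signal (the product of successive gradients) is the standard first-order ingredient.

```python
# Hedged sketch (not the paper's exact method): a diagonal preconditioner d
# adapted by hypergradient descent on f(x) = 0.5 * sum(curv * x**2), whose
# curvatures differ by a factor of 100. All constants are illustrative.
import numpy as np

def grad(x, curv):
    """Gradient of f(x) = 0.5 * sum(curv * x**2)."""
    return curv * x

curv = np.array([100.0, 1.0])   # very different curvature per coordinate
x = np.array([1.0, 1.0])
d = np.full(2, 1e-3)            # diagonal preconditioner (per-coordinate step)
beta = 1e-6                     # hypergradient step size
g_prev = np.zeros(2)

for _ in range(5000):
    g = grad(x, curv)
    # The hypergradient of the loss w.r.t. d is -g_t * g_{t-1}: grow d where
    # successive gradients agree, shrink it where they start to oscillate.
    d += beta * g * g_prev
    x -= d * g
    g_prev = g

print(x)  # both coordinates end near zero despite the 100x scale gap
```

The stiff coordinate's large early gradients drive its preconditioner entry up quickly, so both directions make progress without a hand-tuned per-coordinate learning rate.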

Differentiable Hebbian Consolidation for Continual Learning

no code implementations • 25 Sep 2019 • Vithursan Thangarasa, Thomas Miconi, Graham W. Taylor

Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge.

Continual Learning · Permuted-MNIST · +1

Differentiable plasticity: training plastic neural networks with backpropagation

5 code implementations • ICML 2018 • Thomas Miconi, Jeff Clune, Kenneth O. Stanley

How can we build agents that keep learning from experience, quickly and efficiently, after their initial training?

Meta-Learning
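The forward dynamics of a plastic network can be sketched as follows. In differentiable plasticity, each connection carries a fixed weight and a plasticity coefficient, both trained by backpropagation (the training loop is omitted here), while a Hebbian trace changes within each episode. Layer sizes, the trace rate, and the random inputs below are illustrative.

```python
# Hedged sketch of a plastic recurrent forward pass: w (slow weights) and
# alpha (per-connection plasticity coefficients) would be learned by
# backprop; hebb is the fast Hebbian trace that evolves within an episode.
# All sizes and constants here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 8
w = rng.normal(0, 0.1, (n, n))      # slow weights (learned by backprop)
alpha = rng.normal(0, 0.1, (n, n))  # plasticity coefficients (also learned)
eta = 0.1                           # Hebbian trace learning/decay rate

hebb = np.zeros((n, n))             # fast within-episode Hebbian trace
x = rng.normal(0, 1, n)
for _ in range(20):
    # Effective weight of each connection: fixed part plus plastic part.
    y = np.tanh((w + alpha * hebb).T @ x)
    # Hebbian update: trace decays toward the pre/post activity outer product.
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    x = y

print(np.abs(hebb).max())
```

Because the Hebbian update is differentiable, gradients flow through `hebb` back into `w` and `alpha`, which is what lets backpropagation shape how much each connection is allowed to change during a lifetime.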

The impossibility of "fairness": a generalized impossibility result for decisions

no code implementations • 5 Jul 2017 • Thomas Miconi

Here we show that, when groups differ in prevalence of the predicted event, several intuitive, reasonable measures of fairness (probability of positive prediction given occurrence or non-occurrence; probability of occurrence given prediction or non-prediction; and ratio of predictions over occurrences for each group) are all mutually exclusive: if one of them is equal among groups, the other two must differ.

Fairness
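The incompatibility described in the abstract can be checked with a small numeric example. The group sizes and confusion-matrix counts below are invented for illustration; the point is that once prevalence differs, equalizing one family of measures forces the others apart.

```python
# Hedged numeric illustration (all counts invented): two groups with equal
# true/false positive rates but different prevalence cannot also have equal
# precision or equal prediction-to-occurrence ratios.
def measures(tp, fp, fn, tn):
    pos, neg = tp + fn, fp + tn
    return {
        "P(pred | occurrence)": tp / pos,                # true positive rate
        "P(pred | non-occurrence)": fp / neg,            # false positive rate
        "P(occurrence | pred)": tp / (tp + fp),          # precision / PPV
        "predictions / occurrences": (tp + fp) / pos,    # prediction ratio
    }

# Group A: prevalence 0.5 (50 of 100 positive).
a = measures(tp=40, fp=10, fn=10, tn=40)
# Group B: prevalence 0.2 (20 of 100 positive), same TPR and FPR as group A.
b = measures(tp=16, fp=16, fn=4, tn=64)

assert a["P(pred | occurrence)"] == b["P(pred | occurrence)"] == 0.8
assert a["P(pred | non-occurrence)"] == b["P(pred | non-occurrence)"] == 0.2
# With prevalence differing, the remaining measures necessarily diverge:
print(a["P(occurrence | pred)"], b["P(occurrence | pred)"])          # 0.8 vs 0.5
print(a["predictions / occurrences"], b["predictions / occurrences"])  # 1.0 vs 1.6
```

Equalizing the prediction rates given outcomes leaves precision at 0.8 for the high-prevalence group but 0.5 for the low-prevalence one, matching the paper's mutual-exclusivity claim.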

Learning to learn with backpropagation of Hebbian plasticity

no code implementations • 8 Sep 2016 • Thomas Miconi

As a result, the networks "learn how to learn" in order to solve the problem at hand: the trained networks automatically perform fast learning of unpredictable environmental features during their lifetime, expanding the range of solvable problems.

Continual Learning · One-Shot Learning

Neural networks with differentiable structure

1 code implementation • 20 Jun 2016 • Thomas Miconi

We test this method on recurrent neural networks applied to simple sequence prediction problems.
