Search Results for author: Craig Atkinson

Found 7 papers, 1 paper with code

Conceptual capacity and effective complexity of neural networks

no code implementations • 13 Mar 2021 • Lech Szymanski, Brendan McCane, Craig Atkinson

We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs.

Diversity
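
As a rough, hedged sketch of the underlying idea (not the paper's actual measure), the snippet below estimates how varied a network's tangent maps are across a sample of inputs by comparing per-input Jacobians with cosine similarity; the function name and the final dissimilarity score are illustrative assumptions.

import torch

def tangent_diversity(model, inputs):
    """Average pairwise dissimilarity between per-input Jacobians (tangent maps).

    model maps a (d_in,) tensor to a (d_out,) tensor; inputs has shape (n, d_in).
    Higher values indicate more varied local linear behaviour across inputs.
    """
    jacs = []
    for x in inputs:
        J = torch.autograd.functional.jacobian(model, x)  # (d_out, d_in) tangent map at x
        jacs.append(J.flatten() / (J.norm() + 1e-12))     # unit-normalise for cosine comparison
    jacs = torch.stack(jacs)                              # (n, d_out * d_in)
    sims = jacs @ jacs.T                                  # pairwise cosine similarities
    n = sims.shape[0]
    mean_off_diag = (sims.sum() - n) / (n * (n - 1))      # diagonal entries are all 1
    return 1.0 - mean_off_diag                            # dissimilarity as a crude "diversity"

# Example: a small ReLU MLP evaluated at a handful of random inputs.
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
xs = torch.randn(8, 4)
print(tangent_diversity(net, xs).item())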

MIME: Mutual Information Minimisation Exploration

no code implementations • 16 Jan 2020 • Haitao Xu, Brendan McCane, Lech Szymanski, Craig Atkinson

We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn.

Montezuma's Revenge, reinforcement-learning, +2
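
For context on the claim above, here is a minimal, hedged sketch of a surprisal-style intrinsic reward of the kind being critiqued: the bonus is the prediction error of a learned forward dynamics model, so it stays high at exactly those transitions the model fails to learn. The dyn model and the MSE form are illustrative assumptions, not MIME's own objective.

import torch
import torch.nn.functional as F

def surprisal_bonus(dyn, state, action, next_state):
    """Intrinsic reward = prediction error of a learned dynamics model dyn(state, action).

    At abrupt, hard-to-learn transition boundaries this error never shrinks,
    so a surprisal-driven agent keeps getting rewarded for revisiting them.
    """
    with torch.no_grad():
        predicted_next = dyn(state, action)
    return F.mse_loss(predicted_next, next_state).item()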

GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal

no code implementations • 27 Nov 2019 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform in earlier tasks.

Atari Games, Continual Learning, +4
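
For readers unfamiliar with the mechanism, the sketch below shows one generic pseudo-rehearsal training step: pseudo-inputs drawn from a generator are labelled by the previous network and rehearsed alongside the new task's data. This is plain pseudo-rehearsal under illustrative names (generator.latent_dim, the MSE rehearsal loss), not GRIm-RePR's prioritised variant.

import torch
import torch.nn.functional as F

def pseudo_rehearsal_step(net, old_net, generator, optimiser, new_x, new_y, n_pseudo=64):
    """One step of learning a new task while rehearsing generated pseudo-items."""
    with torch.no_grad():
        # Generate pseudo-inputs and label them with the previous network's outputs.
        pseudo_x = generator(torch.randn(n_pseudo, generator.latent_dim))
        pseudo_y = old_net(pseudo_x)                      # soft targets encoding old behaviour

    new_loss = F.cross_entropy(net(new_x), new_y)         # loss on the current task
    rehearsal_loss = F.mse_loss(net(pseudo_x), pseudo_y)  # stay close to old behaviour

    optimiser.zero_grad()
    (new_loss + rehearsal_loss).backward()
    optimiser.step()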

Switched linear projections and inactive state sensitivity for deep neural network interpretability

no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson

The method works by isolating the active subnetwork, a series of linear transformations that completely determines the entire computation of the deep network for a given input instance.

Switched linear projections for neural network interpretability

no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson

We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space.
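
A hedged reading of the idea in these two entries for a plain ReLU MLP (an illustration of the abstracts, not necessarily the papers' exact construction): for a fixed input the active units select an affine path through the network, so any neuron's pre-activation can be written as a single projection w_eff @ x + b_eff in the input space.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def effective_projection(weights, biases, x, layer, unit):
    """Return (w_eff, b_eff) so that the pre-activation of `unit` in `layer`
    equals w_eff @ x + b_eff for this particular input x.

    weights[l] has shape (n_l, n_{l-1}); layers are 0-indexed and ReLU-activated.
    """
    A = np.eye(x.shape[0])                       # running linear map from input to hidden
    c = np.zeros(x.shape[0])                     # running offset
    h = x.copy()
    for l in range(layer):
        z = weights[l] @ h + biases[l]
        mask = (z > 0).astype(A.dtype)           # activation pattern of layer l at x
        A = (weights[l] * mask[:, None]) @ A     # compose only the active (switched-on) weights
        c = (weights[l] @ c + biases[l]) * mask
        h = relu(z)
    w_eff = weights[layer][unit] @ A
    b_eff = weights[layer][unit] @ c + biases[layer][unit]
    return w_eff, b_eff

# Quick check on a random two-layer net: the projection reproduces the
# pre-activation obtained by running the network normally.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 5)), rng.standard_normal((6, 8))]
bs = [rng.standard_normal(8), rng.standard_normal(6)]
x = rng.standard_normal(5)
w_eff, b_eff = effective_projection(Ws, bs, x, layer=1, unit=3)
direct = Ws[1] @ relu(Ws[0] @ x + bs[0]) + bs[1]
assert np.isclose(w_eff @ x + b_eff, direct[3])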

Pseudo-Rehearsal: Achieving Deep Reinforcement Learning without Catastrophic Forgetting

1 code implementation • 6 Dec 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

We propose a model that overcomes catastrophic forgetting in sequential reinforcement learning by combining ideas from continual learning in both the image classification domain and the reinforcement learning domain.

Atari Games, Continual Learning, +4
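
As a hedged sketch of how pseudo-rehearsal can be carried over to value-based RL (an illustration, not the paper's exact algorithm): alongside the usual TD loss on the current game, the Q-network is also asked to match the previous network's Q-values on generator-produced pseudo-states. All names here are illustrative placeholders.

import torch
import torch.nn.functional as F

def rl_pseudo_rehearsal_loss(q_net, old_q_net, generator, td_loss, n_pseudo=32):
    """Total loss = TD loss on the new game + rehearsal loss on pseudo-states."""
    with torch.no_grad():
        pseudo_states = generator(torch.randn(n_pseudo, generator.latent_dim))
        old_q = old_q_net(pseudo_states)          # Q-values of the previous agent to preserve
    rehearsal = F.mse_loss(q_net(pseudo_states), old_q)
    return td_loss + rehearsal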
