no code implementations • 13 Mar 2021 • Lech Szymanski, Brendan McCane, Craig Atkinson
We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs.
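The abstract does not spell out how tangent-space diversity is computed, but the idea can be illustrated: each input's Jacobian spans a tangent space of the mapping, and pairwise principal angles between those spaces are one plausible diversity proxy. A minimal sketch (toy ReLU network, finite-difference Jacobians; the network, sizes, and the angle-based measure are assumptions, not the paper's definition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU network: 3 inputs -> 8 hidden -> 2 outputs (arbitrary weights).
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def jacobian(x, eps=1e-6):
    """Finite-difference Jacobian of f at x; its rows span the tangent
    space of the mapping at that input."""
    J = np.zeros((2, 3))
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

def principal_angle(J1, J2):
    """Largest principal angle between two tangent (row) spaces -- one
    plausible way to quantify how much they differ."""
    Q1 = np.linalg.qr(J1.T)[0]
    Q2 = np.linalg.qr(J2.T)[0]
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))

xs = rng.normal(size=(5, 3))
Js = [jacobian(x) for x in xs]
dists = [principal_angle(Js[i], Js[j])
         for i in range(5) for j in range(i + 1, 5)]
print(f"mean pairwise tangent-space angle: {np.mean(dists):.3f} rad")
```

A network whose tangent spaces vary little across inputs behaves nearly linearly; larger average angles indicate a more complex mapping.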
no code implementations • 16 Jan 2020 • Haitao Xu, Brendan McCane, Lech Szymanski, Craig Atkinson
We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn.
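Surprisal-driven agents typically use the negative log-likelihood of the observed next state under a learned dynamics model as intrinsic reward; at an abrupt transition the model keeps mispredicting, so the reward stays high and the agent lingers at the boundary. A hedged sketch with a Gaussian predictive model (the model form and the example states are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def surprisal(next_state, predicted_mean, sigma=1.0):
    """Intrinsic reward: negative log-likelihood of the observed next
    state under a Gaussian dynamics model (assumed form). States the
    model predicts badly yield high surprisal."""
    d = next_state - predicted_mean
    k = next_state.size
    return 0.5 * (d @ d) / sigma**2 + 0.5 * k * np.log(2 * np.pi * sigma**2)

s_next = np.array([1.0, 0.0])

# Smooth region: the model has learned the dynamics, prediction is close.
r_smooth = surprisal(s_next, predicted_mean=np.array([1.02, 0.01]))

# Abrupt transition boundary: the model's prediction is far off, so the
# surprisal reward remains large every time the agent crosses it.
r_abrupt = surprisal(s_next, predicted_mean=np.array([5.0, -3.0]))
print(r_smooth, r_abrupt)
```

Because `r_abrupt` dominates `r_smooth`, the agent is repeatedly drawn back to the hard-to-learn boundary rather than exploring beyond it.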
no code implementations • 27 Nov 2019 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform earlier tasks.
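Pseudo-rehearsal works by labelling random "pseudo" inputs with a frozen copy of the network trained on earlier tasks, then interleaving those pseudo-items with the new task's data so the old input-output behaviour is preserved. A minimal sketch (the frozen network here is a hypothetical fixed linear map, used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_rehearsal_batch(frozen_net, n_items, input_dim):
    """Generate pseudo-items: random inputs labelled by a frozen copy of
    the network trained on earlier tasks. Rehearsing these approximates
    rehearsal of the old tasks without storing any old data."""
    x = rng.normal(size=(n_items, input_dim))  # random 'pseudo' inputs
    y = frozen_net(x)                          # old network's responses
    return x, y

# Hypothetical frozen network (a fixed linear map for illustration).
W_old = rng.normal(size=(4, 3))
frozen = lambda x: x @ W_old.T

px, py = pseudo_rehearsal_batch(frozen, n_items=32, input_dim=3)
# Interleave (px, py) with the new task's batches during training so the
# network is penalised for drifting from its old behaviour.
print(px.shape, py.shape)  # (32, 3) (32, 4)
```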
no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson
The method works by isolating the active subnetwork, a series of linear transformations that completely determines the deep network's computation for a given input instance.
no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson
We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space.
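For a piecewise-linear network such as one built from ReLUs, each input fixes which units are active, and the whole computation collapses to a single linear projection in the input space. A minimal sketch of that collapse for a two-layer ReLU network (layer sizes and weights are illustrative; this is the generic piecewise-linear construction, not necessarily the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-layer ReLU network. For a fixed input, each ReLU is either on or
# off, so the forward pass equals ONE linear map W_eff @ x + b_eff.
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
W2, b2 = rng.normal(size=(3, 6)), rng.normal(size=3)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def switched_linear(x):
    """Effective (W, b) of the active subnetwork selected by input x:
    mask the first layer by which ReLUs fired, then compose the layers."""
    mask = (W1 @ x + b1 > 0).astype(float)  # the 'switch' pattern
    W_eff = W2 @ (mask[:, None] * W1)
    b_eff = W2 @ (mask * b1) + b2
    return W_eff, b_eff

x = rng.normal(size=4)
W_eff, b_eff = switched_linear(x)
assert np.allclose(forward(x), W_eff @ x + b_eff)  # identical output
```

Each row of `W_eff` expresses one output neuron's activity as a single linear projection of the input, valid throughout the linear region that `x` selects.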
1 code implementation • 6 Dec 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
We propose a model that overcomes catastrophic forgetting in sequential reinforcement learning by combining ideas from continual learning in both the image classification domain and the reinforcement learning domain.
no code implementations • 12 Feb 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
In general, neural networks are not currently capable of learning tasks sequentially.