Search Results for author: Lech Szymanski

Found 13 papers, 2 papers with code

Conceptual capacity and effective complexity of neural networks

no code implementations • 13 Mar 2021 • Lech Szymanski, Brendan McCane, Craig Atkinson

We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs.
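A minimal sketch of the underlying idea, not the paper's exact measure: compute the Jacobian of a small ReLU network at several inputs and compare the resulting tangent directions. The network, weights, and the diversity statistic below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ReLU network R^3 -> R; random weights, for illustration only.
W1, W2 = rng.standard_normal((8, 3)), rng.standard_normal((1, 8))

def jacobian(x):
    """Analytic Jacobian of the ReLU net at input x (shape (3,))."""
    mask = (W1 @ x > 0).astype(float)        # active-neuron indicator
    return W2 @ (W1 * mask[:, None])         # chain rule through ReLU

# Tangent directions at a batch of inputs, normalised to unit length.
xs = rng.standard_normal((20, 3))
J = np.vstack([jacobian(x) for x in xs])
J /= np.linalg.norm(J, axis=1, keepdims=True)

# Crude diversity proxy: mean pairwise angular dissimilarity of tangents.
sims = np.abs(J @ J.T)
diversity = 1.0 - (sims.sum() - len(xs)) / (len(xs) * (len(xs) - 1))
print(f"tangent diversity ~ {diversity:.3f}")
```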

MIME: Mutual Information Minimisation Exploration

no code implementations • 16 Jan 2020 • Haitao Xu, Brendan McCane, Lech Szymanski, Craig Atkinson

We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn.

Montezuma's Revenge • reinforcement-learning • +1
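For context, a generic surprisal-style intrinsic reward of the kind the abstract refers to, sketched under the assumption of a learned diagonal-Gaussian forward model. This is the mechanism the paper critiques, not MIME itself, and all names are illustrative.

```python
import numpy as np

def surprisal_bonus(s_next, pred_mean, pred_var):
    """Intrinsic reward = -log p(s' | s, a) under a diagonal-Gaussian
    forward model; large wherever transitions are hard to predict."""
    nll = 0.5 * (np.log(2 * np.pi * pred_var)
                 + (s_next - pred_mean) ** 2 / pred_var)
    return nll.sum()

# Smooth region: model predicts well -> small bonus.
print(surprisal_bonus(np.array([1.0]), np.array([1.05]), np.array([0.1])))
# Abrupt transition boundary: persistent prediction error -> the bonus
# stays large, which is where surprisal-driven agents can get stuck.
print(surprisal_bonus(np.array([5.0]), np.array([1.05]), np.array([0.1])))
```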

GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal

no code implementations • 27 Nov 2019 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform earlier tasks.

Atari Games • Continual Learning • +3
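A bare-bones sketch of the pseudo-rehearsal loop, with heavy simplifications: the models are plain linear maps and the generator is replaced by noise, so only the structure of the loop comes from the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "old model": a fixed linear map trained earlier on task A.
W_old = rng.standard_normal((2, 4))
old_model = lambda X: X @ W_old.T

# 1. Generate pseudo-items and label them with the old model. (The paper
#    uses a learned generator; plain noise stands in for it here.)
X_pseudo = rng.standard_normal((200, 4))
Y_pseudo = old_model(X_pseudo)

# 2. Fresh data for the new task B.
X_new = rng.standard_normal((200, 4))
Y_new = X_new @ rng.standard_normal((2, 4)).T

# 3. Train on the mixture: the new-task loss and the rehearsal loss pull
#    the model toward task B while anchoring it to its old behaviour.
X_mix, Y_mix = np.vstack([X_new, X_pseudo]), np.vstack([Y_new, Y_pseudo])
W_new, *_ = np.linalg.lstsq(X_mix, Y_mix, rcond=None)
```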

VASE: Variational Assorted Surprise Exploration for Reinforcement Learning

no code implementations • 31 Oct 2019 • Haitao Xu, Brendan McCane, Lech Szymanski

Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL).

Continuous Control • Efficient Exploration • +3

Switched linear projections and inactive state sensitivity for deep neural network interpretability

no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson

The method works by isolating the active subnetwork, a series of linear transformations that completely determines the deep network's computation for a given input instance.

Switched linear projections for neural network interpretability

no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson

We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space.
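The core observation behind both switched-linear-projection entries above: for a fixed input, a ReLU network collapses to a single linear map determined by which neurons are active. A minimal sketch, bias-free for brevity and with illustrative random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 5)), rng.standard_normal((3, 16))

x = rng.standard_normal(5)
h = np.maximum(W1 @ x, 0.0)                 # ordinary forward pass
y = W2 @ h

# The "switch": which hidden neurons are active for this input.
mask = (W1 @ x > 0).astype(float)

# Collapse the active subnetwork into one linear projection of the input.
W_eff = W2 @ (W1 * mask[:, None])
assert np.allclose(y, W_eff @ x)            # same output, single projection
```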

Pseudo-Rehearsal: Achieving Deep Reinforcement Learning without Catastrophic Forgetting

1 code implementation • 6 Dec 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

We propose a model that overcomes catastrophic forgetting in sequential reinforcement learning by combining ideas from continual learning in both the image classification and reinforcement learning domains.

Atari Games • Continual Learning • +3

The effect of the choice of neural network depth and breadth on the size of its hypothesis space

no code implementations • 6 Jun 2018 • Lech Szymanski, Brendan McCane, Michael Albert

We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to $\prod_l U_l!$, where $U_l$ is the number of neurons in hidden layer $l$.
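The $\prod_l U_l!$ factor reflects permutation symmetry: reordering the neurons within a hidden layer, together with their attached weights, leaves the computed function unchanged. A quick check of the count, with hypothetical layer sizes:

```python
from math import factorial, prod

hidden_layers = [4, 3]        # U_1 = 4, U_2 = 3 neurons (assumed sizes)
symmetries = prod(factorial(u) for u in hidden_layers)
print(symmetries)             # 4! * 3! = 144 weight settings per function
```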

Some Approximation Bounds for Deep Networks

no code implementations • 8 Mar 2018 • Brendan McCane, Lech Szymanski

In this paper we introduce new bounds on the approximation of functions by deep networks, and in doing so propose some new deep network architectures for function approximation.

Effects of the optimisation of the margin distribution on generalisation in deep architectures

no code implementations • 19 Apr 2017 • Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou

Despite being so vital to the success of Support Vector Machines, the principle of separating margin maximisation is not used in deep learning.
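For reference, the margin principle the abstract alludes to, written as a generic SVM-style multi-class hinge loss that could be attached to any network's output scores. This illustrates the principle only; it is not the paper's proposed objective.

```python
import numpy as np

def multiclass_hinge(scores, y, margin=1.0):
    """Penalise any class whose score comes within `margin`
    of the true class's score (Crammer-Singer style)."""
    true = scores[np.arange(len(y)), y][:, None]
    losses = np.maximum(0.0, margin - (true - scores))
    losses[np.arange(len(y)), y] = 0.0      # ignore the true class itself
    return losses.max(axis=1).mean()

scores = np.array([[2.0, 0.5, -1.0], [0.2, 0.1, 0.3]])
print(multiclass_hinge(scores, np.array([0, 2])))
```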

Deep Radial Kernel Networks: Approximating Radially Symmetric Functions with Deep Networks

1 code implementation • 9 Mar 2017 • Brendan McCane, Lech Szymanski

We prove that a particular deep network architecture is more efficient at approximating radially symmetric functions than the best known 2 or 3 layer networks.
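To ground the term: a radially symmetric function depends on its input only through the norm, $f(x) = g(\lVert x \rVert)$, and is therefore invariant under rotations. A small runnable check, with an example function assumed rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.exp(-np.linalg.norm(x) ** 2)   # Gaussian bump: g(r) = e^{-r^2}

# Random orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

x = rng.standard_normal(5)
assert np.isclose(f(x), f(Q @ x))   # value depends only on ||x||
```

The paper's claim is that a deep architecture can exploit this structure more efficiently than the best known two- or three-layer networks.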

Auto-JacoBin: Auto-encoder Jacobian Binary Hashing

no code implementations • 25 Feb 2016 • Xiping Fu, Brendan McCane, Steven Mills, Michael Albert, Lech Szymanski

Binary codes can be used to speed up nearest neighbor search tasks in large scale data sets as they are efficient for both storage and retrieval.

Retrieval
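A generic random-hyperplane hashing sketch showing why binary codes speed up retrieval. This is an LSH-style stand-in for illustration, not the Auto-JacoBin method itself; the sizes and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))       # database vectors
W = rng.standard_normal((64, 32))           # 32 random hyperplanes

codes = X @ W > 0                           # 32-bit binary code per item
q = (X[0] + 0.01 * rng.standard_normal(64)) @ W > 0   # noisy query

# Hamming distance is a popcount over XOR-ed bits: far cheaper to store
# and to scan than full floating-point distances.
dists = np.count_nonzero(codes ^ q, axis=1)
print(dists.argmin())                       # -> 0, the true neighbour
```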
