no code implementations • 13 Mar 2021 • Lech Szymanski, Brendan McCane, Craig Atkinson
We propose a complexity measure for a neural network's mapping function based on the diversity of the tangent spaces it induces at different inputs.
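A minimal sketch of how such a measure might be computed, assuming a tiny MLP, finite-difference Jacobians, and a mean pairwise-distance diversity score; all three choices are illustrative stand-ins, not the paper's definitions:

```python
# Illustrative sketch: the network, the finite-difference Jacobian, and the
# pairwise-distance diversity score are stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((16, 4)), rng.standard_normal((3, 16))

def f(x):
    return W2 @ np.tanh(W1 @ x)              # tiny 4 -> 16 -> 3 MLP

def jacobian(x, eps=1e-5):
    """Central-difference Jacobian of f at x (rows: outputs, cols: inputs)."""
    J = np.empty((3, 4))
    for i in range(4):
        d = np.zeros(4)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

# The tangent space at x is spanned by the rows of J; compare across inputs.
xs = rng.standard_normal((8, 4))
norm = [J / np.linalg.norm(J) for J in (jacobian(x) for x in xs)]
pairs = [(a, b) for i, a in enumerate(norm) for b in norm[i + 1:]]
print("diversity:", np.mean([np.linalg.norm(a - b) for a, b in pairs]))
```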
no code implementations • 10 Aug 2020 • Juncheng Liu, Steven Mills, Brendan McCane
Our network compresses a voxel grid of any size down to a very small latent space in an autoencoder-like network.
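A hedged PyTorch sketch of a voxel autoencoder in this spirit; the layer sizes, the adaptive-pooling trick for handling variable grid sizes, and the latent width are my assumptions, not the paper's architecture:

```python
# Hedged sketch of a voxel autoencoder; sizes and pooling are assumptions.
import torch
import torch.nn as nn

class VoxelAE(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),                 # any grid -> 4x4x4
        )
        self.to_latent = nn.Linear(16 * 4 ** 3, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 4 ** 3), nn.ReLU(),
            nn.Unflatten(1, (16, 4, 4, 4)),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, v):                            # v: (B, 1, D, H, W)
        z = self.to_latent(self.enc(v).flatten(1))   # small latent code
        return self.dec(z), z

recon, z = VoxelAE()(torch.rand(2, 1, 32, 32, 32))
print(recon.shape, z.shape)   # fixed 16x16x16 output here; decoding back to
                              # the input resolution needs extra upsampling
```

Note the trade-off this sketch makes: adaptive pooling lets the encoder accept any grid size, but the decoder then commits to a single output resolution.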
no code implementations • 16 Jan 2020 • Haitao Xu, Brendan McCane, Lech Szymanski, Craig Atkinson
We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn.
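A toy sketch of a surprisal-style intrinsic reward, where the bonus is the prediction error of a learned forward model; the dynamics, the linear model, and the update rule are all illustrative assumptions:

```python
# Hedged sketch: intrinsic bonus = forward-model prediction error. At an
# abrupt transition the model keeps mis-predicting, so the bonus stays high
# there and a surprise-seeking agent keeps returning to the boundary.
import numpy as np

W = np.zeros((2, 2))                     # linear forward model s' ~ W s

def step(s):
    # toy dynamics with an abrupt transition: the state "teleports"
    return s + 0.1 if s[0] <= 1 else np.array([-1.0, -1.0])

def intrinsic_reward(s, s_next, lr=0.1):
    global W
    err = s_next - W @ s
    W += lr * np.outer(err, s)           # one SGD step on squared error
    return float(err @ err)              # surprisal proxy: prediction error

s = np.array([0.0, 0.0])
for t in range(30):
    s_next = step(s)
    r_int = intrinsic_reward(s, s_next)
    s = s_next
print("final intrinsic reward:", r_int)
```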
no code implementations • 27 Nov 2019 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform earlier ones.
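A minimal sketch of the pseudo-rehearsal idea, assuming random noise as the pseudo-item generator (in practice a trained generative model would be used); the helper names are hypothetical:

```python
# Hedged sketch: instead of storing old data, sample "pseudo-items", label
# them with the previous model, and mix them into training on the new task.
import numpy as np

rng = np.random.default_rng(1)

def old_model(x):                 # stands in for the network after task 1
    return (x.sum(axis=1) > 0).astype(int)

def make_pseudo_batch(n=64, dim=8):
    x = rng.standard_normal((n, dim))     # random inputs; a generative
    y = old_model(x)                      # model would be used in practice
    return x, y                           # pseudo-items keep old behaviour

x_new, y_new = rng.standard_normal((64, 8)), rng.integers(0, 2, 64)
x_old, y_old = make_pseudo_batch()
x_mix = np.concatenate([x_new, x_old])    # train on this mixed batch so the
y_mix = np.concatenate([y_new, y_old])    # network rehearses old responses
print(x_mix.shape, y_mix.shape)
```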
no code implementations • 31 Oct 2019 • Haitao Xu, Brendan McCane, Lech Szymanski
Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL).
no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson
We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space.
no code implementations • 25 Sep 2019 • Lech Szymanski, Brendan McCane, Craig Atkinson
The method works by isolating the active subnetwork, a series of linear transformations that completely determines the deep network's computation for a given input instance.
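For a ReLU network this collapse can be checked directly: for a fixed input, each ReLU layer acts as a 0/1 diagonal gate, so composing the gated layers yields a single linear projection. A small sketch with assumed layer sizes:

```python
# For a fixed input x, ReLU(z) = D z with D = diag(z > 0), so the whole
# network reduces to one linear map in input space. Sizes are assumed.
import numpy as np

rng = np.random.default_rng(2)
W1, W2, W3 = (rng.standard_normal(s) for s in [(6, 4), (5, 6), (1, 5)])

x = rng.standard_normal(4)
h1 = np.maximum(W1 @ x, 0)
h2 = np.maximum(W2 @ h1, 0)
y = W3 @ h2                               # network output

D1 = np.diag((W1 @ x > 0).astype(float))  # gates of the active subnetwork
D2 = np.diag((W2 @ h1 > 0).astype(float))
P = W3 @ D2 @ W2 @ D1 @ W1                # single equivalent projection
print(np.allclose(y, P @ x))              # True: same output for this x
```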
no code implementations • 3 May 2019 • Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal
We present a simple and effective way of achieving this by learning a generic Mahalanobis distance in a collaborative loss function, trained end-to-end with any standard convolutional network as the feature learner.
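A hedged sketch of a learnable generalised Mahalanobis distance that could sit on top of any CNN feature extractor; the parameterisation $M = L^\top L$ (which keeps $M$ positive semi-definite) and the feature dimension are assumptions, not the paper's exact collaborative loss:

```python
# Hedged sketch: learnable Mahalanobis distance d(a, b) = sqrt((a-b)^T M (a-b))
# with M = L^T L so M stays positive semi-definite; dim is an assumption.
import torch
import torch.nn as nn

class MahalanobisDistance(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.L = nn.Parameter(torch.eye(dim))  # learned factor of M

    def forward(self, a, b):
        d = (a - b) @ self.L.T                 # project the difference by L
        return (d * d).sum(-1).clamp(min=1e-12).sqrt()  # avoid sqrt(0) grads

dist = MahalanobisDistance(dim=128)
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(dist(a, b).shape)                        # per-pair distances, trainable
```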
no code implementations • 21 Mar 2019 • Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal
We present a conditional probabilistic framework for collaborative representation of image patches.
no code implementations • 28 Jan 2019 • Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal
We present an end-to-end deep network for fine-grained visual categorization called Collaborative Convolutional Network (CoCoNet).
1 code implementation • 6 Dec 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
We propose a model that overcomes catastrophic forgetting in sequential reinforcement learning by combining continual-learning ideas from both the image-classification and reinforcement-learning domains.
no code implementations • 6 Jun 2018 • Lech Szymanski, Brendan McCane, Michael Albert
We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to $\prod_l U_l!$, where $U_l$ is the number of neurons in hidden layer $l$.
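The intuition, with illustrative numbers of my own choosing: permuting the neurons of a hidden layer together with their incoming and outgoing weights leaves the mapping unchanged, so each function is realised by many parameter settings:

```latex
% Worked example with illustrative sizes U_1 = 3, U_2 = 2: swapping hidden
% units together with their weights never changes the input--output map, so
% every mapping is realised by at least \prod_l U_l! parameter settings.
\[
  \#\{\text{unique mappings}\}
  \;\le\;
  \frac{\#\{\text{parameter settings}\}}{\prod_l U_l!},
  \qquad
  \prod_l U_l! = 3! \cdot 2! = 12 .
\]
```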
no code implementations • 8 Mar 2018 • Brendan McCane, Lech Szymanski
In this paper we introduce new bounds on function approximation in deep networks and, in doing so, propose some new deep network architectures for function approximation.
no code implementations • 12 Feb 2018 • Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins
In general, neural networks are not currently capable of learning tasks sequentially: when trained on a new task, they substantially forget how to solve previously learnt ones.
no code implementations • 25 Oct 2017 • Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal
This letter introduces the LOOP (Local Optimal Oriented Pattern) binary descriptor, which encodes rotation invariance into the main formulation itself.
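A toy sketch of the rank-weighting mechanism on a single pixel; the simple difference-based directional strength below is a crude stand-in for the Kirsch mask responses used in the paper:

```python
# Hedged sketch of the LOOP idea: like LBP, threshold the 8 neighbours
# against the centre, but assign each neighbour's bit weight by the rank of
# a directional strength, so rotating the patch permutes bits and weights
# together and the code is unchanged. Strength here is a crude stand-in.
import numpy as np

patch = np.array([[5, 9, 2],
                  [7, 6, 1],
                  [8, 3, 4]], dtype=float)

# 8 neighbours in a fixed circular order around the centre pixel
offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]
centre = patch[1, 1]
neigh = np.array([patch[1 + r, 1 + c] for r, c in offsets])

strength = np.abs(neigh - centre)          # stand-in directional response
weight = np.argsort(np.argsort(strength))  # rank of each direction: 0..7
code = int(np.sum((neigh > centre) * 2.0 ** weight))
print("LOOP-style code:", code)
```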
no code implementations • 19 Apr 2017 • Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou
Despite being vital to the success of Support Vector Machines, the principle of separating margin maximisation is not used in deep learning.
1 code implementation • 9 Mar 2017 • Brendan McCane, Lech Szymanski
We prove that a particular deep network architecture is more efficient at approximating radially symmetric functions than the best known two- or three-layer networks.
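A sketch of the underlying intuition only, not the paper's construction: a radially symmetric target depends on a single scalar, so a deep network can first reduce the input to its radius and then approximate a one-dimensional function:

```python
# Intuition sketch: collapse the d-dimensional input to its radius, then
# approximate the 1-D profile g; here g(r) = sin(r), and the interpolation
# table is a crude stand-in for a small 1-D subnetwork.
import numpy as np

rng = np.random.default_rng(3)

def target(x):                        # radially symmetric: f(x) = sin(||x||)
    return np.sin(np.linalg.norm(x, axis=-1))

x = rng.standard_normal((5, 10))      # 10-dimensional inputs
r = np.linalg.norm(x, axis=-1)        # stage 1: collapse to one scalar
knots = np.linspace(0.0, 6.0, 200)
approx = np.interp(r, knots, np.sin(knots))   # stage 2: 1-D approximation
print(np.max(np.abs(approx - target(x))))     # small error, tiny "network"
```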
no code implementations • 25 Feb 2016 • Xiping Fu, Brendan McCane, Steven Mills, Michael Albert, Lech Szymanski
Binary codes can be used to speed up nearest neighbor search in large-scale data sets, as they are efficient for both storage and retrieval.
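A hedged sketch of why this is fast, using random-hyperplane hashing as a stand-in for the learned codes in the paper: each vector is packed into one machine word, and distance becomes an XOR plus a popcount:

```python
# Hedged sketch: random-hyperplane hashing (not the paper's learned codes)
# packs each vector into one 64-bit word; Hamming distance is XOR + popcount.
import numpy as np

rng = np.random.default_rng(4)
data = rng.standard_normal((10_000, 64))
query = rng.standard_normal(64)

H = rng.standard_normal((64, 64))                 # 64 random hyperplanes

def encode(v):
    bits = (v @ H > 0).astype(np.uint64)
    return (bits << np.arange(64, dtype=np.uint64)).sum()  # pack to 1 word

codes = np.array([encode(v) for v in data], dtype=np.uint64)
q = encode(query)
ham = np.array([bin(int(c ^ q)).count("1") for c in codes])
best = np.argmin(ham)                             # candidate neighbour
print(best, ham[best])
```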