Search Results for author: Brendan McCane

Found 18 papers, 2 papers with code

Conceptual capacity and effective complexity of neural networks

no code implementations · 13 Mar 2021 · Lech Szymanski, Brendan McCane, Craig Atkinson

We propose a complexity measure of a neural network mapping function based on the diversity of the set of tangent spaces from different inputs.

Diversity
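The tangent-space idea can be illustrated with a minimal sketch: estimate the Jacobian of a toy ReLU network at several inputs by finite differences, then compare the subspaces they span via principal angles. The network weights and the averaged-distance statistic below are illustrative placeholders, not the paper's actual complexity measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network R^4 -> R^2 (arbitrary placeholder weights).
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def jacobian(x, eps=1e-5):
    """Central finite-difference Jacobian of f at x (rows: outputs, cols: inputs)."""
    J = np.zeros((2, 4))
    for i in range(4):
        e = np.zeros(4); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def subspace_distance(J1, J2):
    """Distance between the row spaces via principal angles (0 = identical spans)."""
    Q1, _ = np.linalg.qr(J1.T)
    Q2, _ = np.linalg.qr(J2.T)
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
    return np.sqrt(np.sum(np.arccos(s) ** 2))

# More diverse tangent spaces across inputs -> larger average pairwise distance.
xs = rng.normal(size=(5, 4))
Js = [jacobian(x) for x in xs]
dists = [subspace_distance(Js[i], Js[j]) for i in range(5) for j in range(i + 1, 5)]
print(round(float(np.mean(dists)), 4))
```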

RocNet: Recursive Octree Network for Efficient 3D Deep Representation

no code implementations · 10 Aug 2020 · Juncheng Liu, Steven Mills, Brendan McCane

Our network compresses a voxel grid of any size down to a very small latent space in an autoencoder-like network.

3D Reconstruction · 3D Shape Classification +1

MIME: Mutual Information Minimisation Exploration

no code implementations · 16 Jan 2020 · Haitao Xu, Brendan McCane, Lech Szymanski, Craig Atkinson

We show that reinforcement learning agents that learn by surprise (surprisal) get stuck at abrupt environmental transition boundaries because these transitions are difficult to learn.

Montezuma's Revenge · Reinforcement Learning +2

GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal

no code implementations · 27 Nov 2019 · Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

Pseudo-rehearsal allows neural networks to learn a sequence of tasks without forgetting how to perform in earlier tasks.

Atari Games · Continual Learning +4

VASE: Variational Assorted Surprise Exploration for Reinforcement Learning

no code implementations · 31 Oct 2019 · Haitao Xu, Brendan McCane, Lech Szymanski

Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL).

Continuous Control +5

Switched linear projections for neural network interpretability

no code implementations · 25 Sep 2019 · Lech Szymanski, Brendan McCane, Craig Atkinson

We introduce switched linear projections for expressing the activity of a neuron in a deep neural network in terms of a single linear projection in the input space.
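For a ReLU network, the computation at any fixed input lies on one linear piece, so the units that are switched on can be collapsed into a single linear projection in input space. A minimal sketch of that idea, using toy weights rather than the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny ReLU network; for a fixed input the active units define one linear piece.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

x = rng.normal(size=8)

# Forward pass, recording which hidden units are active ("switched on").
pre = W1 @ x + b1
mask = (pre > 0).astype(float)
y = W2 @ (mask * pre) + b2

# Collapse the active subnetwork into a single linear projection in input space.
W_eff = W2 @ (mask[:, None] * W1)  # effective weights for this particular input
b_eff = W2 @ (mask * b1) + b2      # effective bias
y_linear = W_eff @ x + b_eff

print(np.allclose(y, y_linear))  # the linear projection reproduces the output
```

The effective weights change from input to input as the active set changes, which is what makes the projection "switched".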

Switched linear projections and inactive state sensitivity for deep neural network interpretability

no code implementations · 25 Sep 2019 · Lech Szymanski, Brendan McCane, Craig Atkinson

The method works by isolating the active subnetwork, a series of linear transformations that completely determines the computation of the deep network for a given input instance.

Distance Metric Learned Collaborative Representation Classifier

no code implementations · 3 May 2019 · Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal

We present a simple, effective way of achieving this by learning a generic Mahalanobis distance in a collaborative loss function in an end-to-end fashion with any standard convolutional network as the feature learner.

General Classification
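As background, the generic Mahalanobis distance d(x, y) = sqrt((x - y)^T M (x - y)) with M = A^T A can be sketched as follows. Here A is a fixed random placeholder; in the paper it would be learned end-to-end together with the convolutional feature learner.

```python
import numpy as np

rng = np.random.default_rng(2)

# A Mahalanobis metric is parameterised by a PSD matrix M = A^T A.
# A is a fixed placeholder here; in the paper it would be learned.
A = rng.normal(size=(5, 5))
M = A.T @ A

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

x, y = rng.normal(size=5), rng.normal(size=5)

# Equivalent view: Euclidean distance after the linear map A.
d1 = mahalanobis(x, y, M)
d2 = float(np.linalg.norm(A @ (x - y)))
print(np.isclose(d1, d2))  # True
```

The equivalence with a plain Euclidean distance in a transformed space is what lets the metric be trained by backpropagation through A.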

PProCRC: Probabilistic Collaboration of Image Patches

no code implementations · 21 Mar 2019 · Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal

We present a conditional probabilistic framework for collaborative representation of image patches.

Face Recognition

CoCoNet: A Collaborative Convolutional Network

no code implementations · 28 Jan 2019 · Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal

We present an end-to-end deep network for fine-grained visual categorization called Collaborative Convolutional Network (CoCoNet).

Fine-Grained Visual Categorization · Fine-Grained Visual Recognition +1

Pseudo-Rehearsal: Achieving Deep Reinforcement Learning without Catastrophic Forgetting

1 code implementation · 6 Dec 2018 · Craig Atkinson, Brendan McCane, Lech Szymanski, Anthony Robins

We propose a model that overcomes catastrophic forgetting in sequential reinforcement learning by combining ideas from continual learning in both the image classification domain and the reinforcement learning domain.

Atari Games · Continual Learning +4

The effect of the choice of neural network depth and breadth on the size of its hypothesis space

no code implementations · 6 Jun 2018 · Lech Szymanski, Brendan McCane, Michael Albert

We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to $\prod_lU_l!$, where $U_{l}$ is the number of neurons in the hidden layer $l$.
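The $\prod_l U_l!$ factor counts the hidden-unit permutation symmetries of an architecture and is straightforward to compute. The two architectures below are illustrative examples with the same total number of hidden neurons, not figures from the paper.

```python
from math import factorial, prod

def symmetry_factor(hidden_widths):
    """Product of U_l! over hidden layers: the permutation symmetries of the net."""
    return prod(factorial(u) for u in hidden_widths)

# Same total of 12 hidden neurons, arranged deep-and-narrow vs shallow-and-wide.
deep = symmetry_factor([4, 4, 4])     # 24**3 = 13824
shallow = symmetry_factor([12])       # 12!   = 479001600
print(deep, shallow)
```

Since the count of unique function mappings is inversely proportional to this factor, the two arrangements of the same neuron budget differ sharply in how much of the raw parameter space maps to distinct functions.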

Some Approximation Bounds for Deep Networks

no code implementations · 8 Mar 2018 · Brendan McCane, Lech Szymanski

In this paper we derive new bounds on function approximation in deep networks and, in doing so, introduce some new deep network architectures for function approximation.

LOOP Descriptor: Local Optimal Oriented Pattern

no code implementations · 25 Oct 2017 · Tapabrata Chakraborti, Brendan McCane, Steven Mills, Umapada Pal

This letter introduces the LOOP binary descriptor (local optimal oriented pattern) that encodes rotation invariance into the main formulation itself.
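For background, the classic local binary pattern (LBP) that LOOP builds on can be computed for a single 3x3 patch as below. LOOP itself additionally derives the bit order from oriented Kirsch filter responses to obtain rotation invariance; this sketch is plain LBP with a fixed bit order, not the LOOP formulation.

```python
import numpy as np

# Classic LBP for the centre pixel of a 3x3 patch. LOOP instead orders the
# bits by oriented Kirsch filter responses; this sketch uses a fixed order.
patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])

centre = patch[1, 1]
# 8 neighbours in a fixed clockwise order starting at the top-left corner.
neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
              patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]

# Bit i is set when neighbour i is at least as bright as the centre pixel.
code = sum(1 << i for i, n in enumerate(neighbours) if n >= centre)
print(code)  # -> 42
```

Because the fixed bit order changes under rotation of the patch, plain LBP codes are not rotation invariant, which is the weakness LOOP's adaptive bit ordering addresses.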

Effects of the optimisation of the margin distribution on generalisation in deep architectures

no code implementations · 19 Apr 2017 · Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou

Despite being vital to the success of Support Vector Machines, the principle of margin maximisation is not used in deep learning.

Deep Learning

Deep Radial Kernel Networks: Approximating Radially Symmetric Functions with Deep Networks

1 code implementation · 9 Mar 2017 · Brendan McCane, Lech Szymanski

We prove that a particular deep network architecture is more efficient at approximating radially symmetric functions than the best-known 2- and 3-layer networks.

Auto-JacoBin: Auto-encoder Jacobian Binary Hashing

no code implementations · 25 Feb 2016 · Xiping Fu, Brendan McCane, Steven Mills, Michael Albert, Lech Szymanski

Binary codes can be used to speed up nearest neighbor search tasks in large scale data sets as they are efficient for both storage and retrieval.

Retrieval
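The storage and retrieval efficiency comes from comparing codes with XOR plus a popcount rather than floating-point arithmetic. A minimal sketch, with random 32-bit codes standing in for the output of a learned hashing function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Database of 32-bit binary codes (stand-ins for learned hash codes).
db = rng.integers(0, 2**32, size=1000, dtype=np.uint64)
query = db[123] ^ np.uint64(0b101)  # a code 2 bits away from entry 123

# Hamming distance = popcount of the XOR; far cheaper than float distances.
xor = db ^ query
dists = np.array([bin(int(v)).count("1") for v in xor])
nearest = int(np.argmin(dists))
print(nearest, int(dists[nearest]))
```

With random 32-bit codes the planted neighbour at index 123 is, with overwhelming probability, the unique closest entry, so the search returns it at Hamming distance 2.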
