Search Results for author: Andrew J. R. Simpson

Found 18 papers, 1 paper with code

Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep Neural Network

no code implementations 25 Feb 2016 Andrew J. R. Simpson

Recurrent neural networks (RNN) are capable of learning to encode and exploit activation history over an arbitrary timescale.

Qualitative Projection Using Deep Neural Networks

no code implementations 19 Oct 2015 Andrew J. R. Simpson

Deep neural networks (DNN) abstract by demodulating the output of linear filters.
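
A rough sketch of the demodulation idea referenced above (not the paper's own experiment): half-wave rectification, the elementwise operation a ReLU unit applies to a linear filter's output, recovers the slow envelope of an amplitude-modulated signal. The signal parameters and the moving-average smoother below are arbitrary choices made for the illustration.

    import numpy as np

    # Build an amplitude-modulated test signal: slow envelope times fast carrier.
    fs = 1000.0                                     # sample rate in Hz (assumed)
    t = np.arange(0, 1, 1 / fs)
    envelope = 1 + 0.8 * np.sin(2 * np.pi * 3 * t)  # slow "message"
    carrier = np.sin(2 * np.pi * 100 * t)           # fast carrier
    x = envelope * carrier

    # Half-wave rectification: the same nonlinearity a ReLU applies.
    rectified = np.maximum(x, 0.0)

    # A short moving average (a crude low-pass filter) exposes the recovered envelope.
    win = np.ones(25) / 25
    recovered = np.convolve(rectified, win, mode="same")

    # The recovered signal tracks the true envelope up to a gain factor.
    print(f"correlation with true envelope: {np.corrcoef(recovered, envelope)[0, 1]:.2f}")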

Uniform Learning in a Deep Neural Network via "Oddball" Stochastic Gradient Descent

no code implementations 8 Oct 2015 Andrew J. R. Simpson

When training deep neural networks, it is typically assumed that the training examples are uniformly difficult to learn.

"Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training Deep Neural Networks

no code implementations 18 Sep 2015 Andrew J. R. Simpson

Stochastic Gradient Descent (SGD) is arguably the most popular of the machine learning methods applied to training deep neural networks (DNN) today.

Taming the ReLU with Parallel Dither in a Deep Neural Network

no code implementations 17 Sep 2015 Andrew J. R. Simpson

Rectified Linear Units (ReLU) seem to have displaced traditional 'smooth' nonlinearities as activation-function-du-jour in many - but not all - deep neural network (DNN) applications.

Use it or Lose it: Selective Memory and Forgetting in a Perpetual Learning Machine

no code implementations 10 Sep 2015 Andrew J. R. Simpson

In a recent article we described a new type of deep neural network - a Perpetual Learning Machine (PLM) - which is capable of learning 'on the fly' like a brain by existing in a state of Perpetual Stochastic Gradient Descent (PSGD).

On-the-Fly Learning in a Perpetual Learning Machine

no code implementations 3 Sep 2015 Andrew J. R. Simpson

Despite the promise of brain-inspired machine learning, deep neural networks (DNN) have frustratingly failed to bridge the deceptively large gap between learning and memory.

BIG-bench Machine Learning

Parallel Dither and Dropout for Regularising Deep Neural Networks

no code implementations 28 Aug 2015 Andrew J. R. Simpson

Effective regularisation during training can mean the difference between success and failure for deep neural networks.

Dither is Better than Dropout for Regularising Deep Neural Networks

no code implementations 19 Aug 2015 Andrew J. R. Simpson

Regularisation of deep neural networks (DNN) during training is critical to performance.
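
A minimal sketch of the contrast drawn in the title, assuming that "dither" here means adding low-amplitude random noise to each training example before the forward pass (its usual signal-processing sense); the exact noise level and placement used in the paper may differ, and the function names below are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)

    def dither(batch, amplitude=0.1):
        # Perturb every input with independent low-amplitude uniform noise;
        # the batch stays dense, just slightly jittered.
        return batch + rng.uniform(-amplitude, amplitude, size=batch.shape)

    def dropout(batch, p=0.5):
        # Standard inverted dropout for comparison: zero entries at random and
        # rescale the survivors so the expected value is unchanged.
        mask = (rng.random(batch.shape) >= p).astype(batch.dtype)
        return batch * mask / (1.0 - p)

    x = rng.standard_normal((4, 8))   # toy mini-batch: 4 examples, 8 features
    x_dither = dither(x)              # dense, lightly perturbed
    x_dropout = dropout(x)            # roughly half the entries zeroed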

Instant Learning: Parallel Deep Neural Networks and Convolutional Bootstrapping

no code implementations 22 May 2015 Andrew J. R. Simpson

Although deep neural networks (DNN) are able to scale with direct advances in computational power (e.g., memory and processing speed), they are not well suited to exploit the recent trends for parallel architectures.

Deep Karaoke: Extracting Vocals from Musical Mixtures Using a Convolutional Deep Neural Network

1 code implementation 17 Apr 2015 Andrew J. R. Simpson, Gerard Roma, Mark D. Plumbley

Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition.

Speech Separation
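
For orientation, a minimal sketch of the ideal-binary-mask target that networks in this line of work are commonly trained to predict from the mixture spectrogram; toy sinusoids stand in for the vocal and the accompaniment, and this is not the paper's convolutional network or its data (requires numpy and scipy).

    import numpy as np
    from scipy.signal import stft, istft

    fs = 8000
    t = np.arange(0, 1, 1 / fs)
    vocal = np.sin(2 * np.pi * 440 * t)          # toy stand-in for the vocal
    backing = 0.5 * np.sin(2 * np.pi * 110 * t)  # toy stand-in for the accompaniment
    mix = vocal + backing

    # Spectrograms of the isolated sources and of the mixture.
    _, _, V = stft(vocal, fs)
    _, _, B = stft(backing, fs)
    _, _, M = stft(mix, fs)

    # Ideal binary mask: 1 in time-frequency bins where the vocal dominates.
    ibm = (np.abs(V) > np.abs(B)).astype(float)

    # Masking the mixture spectrogram and resynthesising approximates the vocal.
    _, vocal_est = istft(ibm * M, fs)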

Deep Transform: Cocktail Party Source Separation via Complex Convolution in a Deep Neural Network

no code implementations 12 Apr 2015 Andrew J. R. Simpson

Convolutional deep neural networks (DNN) are state of the art in many engineering problems but have not yet addressed the issue of how to deal with complex spectrograms.

Probabilistic Binary-Mask Cocktail-Party Source Separation in a Convolutional Deep Neural Network

no code implementations 24 Mar 2015 Andrew J. R. Simpson

Separation of competing speech is a key challenge in signal processing and a feat routinely performed by the human auditory brain.

Deep Transform: Cocktail Party Source Separation via Probabilistic Re-Synthesis

no code implementations 20 Mar 2015 Andrew J. R. Simpson

In cocktail party listening scenarios, the human brain is able to separate competing speech signals.

Deep Transform: Time-Domain Audio Error Correction via Probabilistic Re-Synthesis

no code implementations 19 Mar 2015 Andrew J. R. Simpson

In the process of recording, storage and transmission of time-domain audio signals, errors may be introduced that are difficult to correct in an unsupervised way.

Deep Transform: Error Correction via Probabilistic Re-Synthesis

no code implementations 16 Feb 2015 Andrew J. R. Simpson

Errors in data are usually unwelcome and so some means to correct them is useful.

Abstract Learning via Demodulation in a Deep Neural Network

no code implementations 13 Feb 2015 Andrew J. R. Simpson

Here, we demonstrate that DNN learn abstract representations by a process of demodulation.

Over-Sampling in a Deep Neural Network

no code implementations 12 Feb 2015 Andrew J. R. Simpson

Deep neural networks (DNN) are the state of the art on many engineering problems such as computer vision and audition.
