no code implementations • 25 Feb 2016 • Andrew J. R. Simpson
Recurrent neural networks (RNN) are capable of learning to encode and exploit activation history over an arbitrary timescale.
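A minimal sketch of the recurrence that makes this possible (a generic vanilla RNN cell, not the paper's specific model; the weights W_x and W_h and all sizes are assumed for illustration):

```python
import numpy as np

# Minimal vanilla RNN cell (illustrative sketch, not the paper's model).
# The hidden state h mixes the current input with its own history, so
# information from arbitrarily old inputs can persist in h.
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden (assumed sizes)
W_h = rng.normal(scale=0.5, size=(4, 4))   # hidden -> hidden (the recurrence)

h = np.zeros(4)
for x in rng.normal(size=(10, 3)):         # a toy 10-step input sequence
    h = np.tanh(W_x @ x + W_h @ h)         # h now depends on every past x
print(h)
```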
no code implementations • 19 Oct 2015 • Andrew J. R. Simpson
Deep neural networks (DNN) abstract by demodulating the output of linear filters.
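A toy illustration of that demodulation idea (not the paper's experiment): rectifying a linear filter's output and smoothing it recovers the slow envelope of a fast carrier, analogous to a nonlinear unit followed by pooling. All signals below are assumed for illustration:

```python
import numpy as np

# 'Demodulation' sketch: a nonlinearity applied to a linear filter's
# output recovers the slow envelope riding on a fast carrier.
t = np.linspace(0, 1, 1000)
envelope = 1 + np.sin(2 * np.pi * 3 * t)        # slow message signal
carrier = np.sin(2 * np.pi * 100 * t)           # fast carrier
filtered = envelope * carrier                   # stand-in for a filter output

rectified = np.maximum(filtered, 0)             # nonlinearity (half-wave rectify)
kernel = np.ones(50) / 50                       # crude low-pass (moving average)
demodulated = np.convolve(rectified, kernel, mode='same')
# 'demodulated' now tracks 'envelope' (up to a scale factor).
```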
no code implementations • 8 Oct 2015 • Andrew J. R. Simpson
When training deep neural networks, it is typically assumed that the training examples are uniformly difficult to learn.
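A hedged illustration of what non-uniform difficulty looks like in practice: ranking per-example losses under a fixed model separates 'easy' from 'hard' examples. The toy model and data below are assumptions, not the paper's setup:

```python
import numpy as np

# Per-example losses are rarely uniform; sorting them exposes the
# spread between 'easy' and 'hard' training examples.
rng = np.random.default_rng(0)
logits = rng.normal(size=100)                  # toy model outputs
labels = rng.integers(0, 2, size=100)          # toy binary labels
p = 1 / (1 + np.exp(-logits))                  # sigmoid probability
loss = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))
print(np.sort(loss)[:5])                       # easiest examples
print(np.sort(loss)[-5:])                      # hardest examples
```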
no code implementations • 18 Sep 2015 • Andrew J. R. Simpson
Stochastic Gradient Descent (SGD) is arguably the most popular of the machine learning methods applied to training deep neural networks (DNN) today.
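For reference, the core SGD update on a toy least-squares problem (a generic sketch; the learning rate and data are assumed, and this is not the paper's experiment):

```python
import numpy as np

# SGD in its simplest form: pick a random example, compute the gradient
# of the loss on that example alone, and step the weights downhill.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=256)

w = np.zeros(5)
lr = 0.01                                   # assumed learning rate
for step in range(10000):
    i = rng.integers(len(X))                # stochastic: one example at a time
    grad = 2 * (X[i] @ w - y[i]) * X[i]     # gradient of the squared error
    w -= lr * grad                          # descent step
print(w)                                    # approaches the true weights
```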
no code implementations • 17 Sep 2015 • Andrew J. R. Simpson
Rectified Linear Units (ReLU) seem to have displaced traditional 'smooth' nonlinearities as the activation function du jour in many - but not all - deep neural network (DNN) applications.
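For contrast, the two kinds of activation function in question (illustrative only):

```python
import numpy as np

# ReLU versus a traditional 'smooth' nonlinearity.
def relu(z):
    return np.maximum(z, 0)        # piecewise-linear; gradient is 0 or 1

def sigmoid(z):
    return 1 / (1 + np.exp(-z))    # smooth; gradient vanishes for large |z|

z = np.linspace(-5, 5, 11)
print(relu(z))
print(sigmoid(z))
```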
no code implementations • 10 Sep 2015 • Andrew J. R. Simpson
In a recent article we described a new type of deep neural network - a Perpetual Learning Machine (PLM) - which is capable of learning 'on the fly' like a brain by existing in a state of Perpetual Stochastic Gradient Descent (PSGD).
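Loosely, PSGD means that training never halts and that newly arriving examples simply join the pool that ongoing SGD steps are drawn from. The sketch below captures only that loop; the model, storage scheme, and all parameters are assumptions, not the paper's construction:

```python
import numpy as np

# A loose sketch of 'perpetual' SGD: an open-ended training loop in
# which new examples arrive 'on the fly' and join the sampling pool.
rng = np.random.default_rng(0)
pool_X, pool_y = [rng.normal(size=3)], [1.0]
w = np.zeros(3)

for step in range(5000):                    # stands in for an endless loop
    if step % 500 == 0:                     # a new example arrives on the fly
        pool_X.append(rng.normal(size=3))
        pool_y.append(float(rng.integers(0, 2)))
    i = rng.integers(len(pool_X))           # ordinary SGD over the whole pool
    err = pool_X[i] @ w - pool_y[i]
    w -= 0.01 * 2 * err * pool_X[i]
```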
no code implementations • 3 Sep 2015 • Andrew J. R. Simpson
Despite the promise of brain-inspired machine learning, deep neural networks (DNN) have frustratingly failed to bridge the deceptively large gap between learning and memory.
no code implementations • 28 Aug 2015 • Andrew J. R. Simpson
Effective regularisation during training can mean the difference between success and failure for deep neural networks.
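As one concrete example, dropout is a widely used regulariser: randomly zeroing a fraction of activations during training so that no single unit can over-specialise. A generic sketch (the keep probability is an assumed parameter):

```python
import numpy as np

# Inverted dropout: zero activations at random during training and
# rescale the survivors, so test time needs no correction.
rng = np.random.default_rng(0)
h = rng.normal(size=8)                 # some hidden-layer activations
p_keep = 0.5                           # assumed keep probability
mask = rng.random(8) < p_keep
h_train = h * mask / p_keep            # training: drop and rescale
h_test = h                             # test: no dropout applied
```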
no code implementations • 19 Aug 2015 • Andrew J. R. Simpson
Regularisation of deep neural networks (DNN) during training is critical to performance.
no code implementations • 22 May 2015 • Andrew J. R. Simpson
Although deep neural networks (DNN) are able to scale with direct advances in computational power (e.g., memory and processing speed), they are not well suited to exploiting recent trends towards parallel architectures.
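For context, the standard data-parallel recipe (a hedged sketch, not the paper's proposal): shard a batch across workers, compute gradients independently, then synchronise by averaging - and that synchronisation step is one reason the fit to parallel hardware is awkward:

```python
import numpy as np

# Data-parallel gradient step, simulated serially: each 'worker' gets a
# shard of the batch, and the gradients must be averaged before any
# weight update can proceed.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = rng.normal(size=64)
w = np.zeros(5)

shards = np.array_split(np.arange(64), 4)          # 4 simulated workers
grads = []
for idx in shards:                                 # would run concurrently
    err = X[idx] @ w - y[idx]
    grads.append(2 * (X[idx].T @ err) / len(idx))  # per-worker gradient
w -= 0.01 * np.mean(grads, axis=0)                 # synchronised average
```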
1 code implementation • 17 Apr 2015 • Andrew J. R. Simpson, Gerard Roma, Mark D. Plumbley
Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition.
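The time-frequency masking recipe that such systems build on (a generic sketch using an oracle mask computed from known sources; in practice a deep network would estimate such a mask from the mixture alone):

```python
import numpy as np

# Source separation by masking: weight each time-frequency bin of the
# mixture spectrogram by how much of it belongs to the target voice.
rng = np.random.default_rng(0)
voice = np.abs(rng.normal(size=(257, 100)))    # toy magnitude spectrograms
accomp = np.abs(rng.normal(size=(257, 100)))
mixture = voice + accomp

mask = voice / (voice + accomp + 1e-8)         # ideal ratio mask (oracle)
voice_est = mask * mixture                     # estimated vocal spectrogram
```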
no code implementations • 12 Apr 2015 • Andrew J. R. Simpson
Convolutional deep neural networks (DNN) are the state of the art in many engineering problems but have not yet addressed the issue of how to deal with complex spectrograms.
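What makes spectrograms 'complex' in the first place (illustrative): the short-time Fourier transform of a real frame is complex-valued, while most DNN expect real inputs, so some real-valued encoding must be chosen. The frame size and encodings below are assumptions:

```python
import numpy as np

# The spectrum of a real audio frame is complex-valued, so a network
# must work with some real-valued encoding of it.
rng = np.random.default_rng(0)
frame = rng.normal(size=512)                 # one windowed audio frame
spectrum = np.fft.rfft(frame)                # complex-valued
print(spectrum.dtype)                        # complex128

real_part = spectrum.real                    # one common real encoding...
imag_part = spectrum.imag
magnitude = np.abs(spectrum)                 # ...or magnitude plus phase
phase = np.angle(spectrum)
```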
no code implementations • 24 Mar 2015 • Andrew J. R. Simpson
Separation of competing speech is a key challenge in signal processing and a feat routinely performed by the human auditory brain.
no code implementations • 20 Mar 2015 • Andrew J. R. Simpson
In cocktail party listening scenarios, the human brain is able to separate competing speech signals.
no code implementations • 19 Mar 2015 • Andrew J. R. Simpson
In the process of recording, storage and transmission of time-domain audio signals, errors may be introduced that are difficult to correct in an unsupervised way.
no code implementations • 16 Feb 2015 • Andrew J. R. Simpson
Errors in data are usually unwelcome, so some means of correcting them is useful.
no code implementations • 13 Feb 2015 • Andrew J. R. Simpson
Here, we demonstrate that DNN learn abstract representations by a process of demodulation.
no code implementations • 12 Feb 2015 • Andrew J. R. Simpson
Deep neural networks (DNN) are the state of the art on many engineering problems such as computer vision and audition.