Over-Sampling in a Deep Neural Network

12 Feb 2015 · Andrew J. R. Simpson

Deep neural networks (DNNs) are the state of the art on many engineering problems, such as computer vision and audition. A key factor in the success of the DNN is scalability: bigger networks work better. However, the reason for this scalability is not yet well understood. Here, we interpret the DNN as a discrete system of linear filters followed by nonlinear activations that is subject to the laws of sampling theory. In this context, we demonstrate that over-sampled networks are more selective, learn faster, and learn more robustly. Our findings may ultimately generalize to the human brain.
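The abstract's central claim, that networks with more hidden units than strictly necessary ("over-sampled" in the sampling-theory sense) learn faster and more robustly, can be illustrated with a toy experiment. The following is a minimal sketch, assuming a one-hidden-layer sigmoid network trained by plain batch gradient descent on a small regression task; the task, layer widths, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(hidden, epochs=2000, lr=0.5):
    """Train a one-hidden-layer sigmoid net with batch gradient
    descent and return its final mean-squared training error."""
    W1 = rng.normal(0.0, 1.0, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)   # hidden-layer activations
        pred = H @ W2 + b2         # linear output layer
        err = pred - y
        # Backpropagation for the mean-squared-error loss.
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1.0 - H)
        dW1 = X.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return float(np.mean(err ** 2))

# Compare a narrow ("critically sampled") hidden layer with a wide
# ("over-sampled") one; exact numbers vary with seed, task, and
# hyperparameters, so treat the output as illustrative only.
for hidden in (4, 64):
    print(f"hidden units = {hidden:3d}  final MSE = {train(hidden):.5f}")
```

Fixing the training budget while varying only the hidden width isolates the effect of over-sampling; in this kind of setup the wider network typically reaches a lower error within the same number of epochs, consistent with the paper's claim.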
