It is well known that neural networks are computationally hard to train. On
the other hand, in practice, modern-day neural networks are trained efficiently
using SGD and a variety of tricks, including different activation functions
(e.g., ReLU), over-specification (i.e., training networks that are larger than
needed), and regularization.
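To make this training recipe concrete, the following is a minimal sketch (not taken from the paper) of SGD on an over-specified one-hidden-layer ReLU network with L2 regularization. The teacher/student setup, network sizes, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch: SGD + ReLU + over-specification + L2 regularization.
# All sizes and hyperparameters below are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data labeled by a small "teacher" ReLU network (4 hidden units).
n, d, k_teacher = 512, 10, 4
W_t = rng.standard_normal((k_teacher, d))
v_t = rng.standard_normal(k_teacher)
X = rng.standard_normal((n, d))
y = np.maximum(X @ W_t.T, 0.0) @ v_t           # teacher labels

# Over-specified "student": many more hidden units than the teacher needs.
k = 64
W = rng.standard_normal((k, d)) * 0.1
v = rng.standard_normal(k) * 0.1

lr, lam, epochs, batch = 1e-2, 1e-4, 200, 32   # SGD with L2 weight decay
for epoch in range(epochs):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        Xb, yb = X[b], y[b]
        H = np.maximum(Xb @ W.T, 0.0)          # ReLU hidden activations
        err = H @ v - yb                       # squared-loss residual
        grad_v = H.T @ err / len(b) + lam * v
        grad_W = ((err[:, None] * v) * (H > 0)).T @ Xb / len(b) + lam * W
        v -= lr * grad_v
        W -= lr * grad_W

mse = np.mean((np.maximum(X @ W.T, 0.0) @ v - y) ** 2)
print(f"final train MSE: {mse:.4f}")
```

In this toy setting, SGD on the larger-than-needed student typically drives the training error down quickly, which is the empirical phenomenon the abstract alludes to.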
In this paper we revisit the computational
complexity of training neural networks from a modern perspective. We provide
both positive and negative results, some of which yield new provably efficient
and practical algorithms for training certain types of neural networks.