On the Computational Efficiency of Training Neural Networks

It is well known that neural networks are computationally hard to train. In practice, however, modern neural networks are trained efficiently using SGD and a variety of tricks, including different activation functions (e.g., ReLU), over-specification (i.e., training networks larger than needed), and regularization.
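As a concrete illustration of the tricks named above, here is a minimal numpy sketch (not from the paper) that trains an over-specified one-hidden-layer ReLU network with minibatch SGD and L2 regularization on a toy regression task; the data, widths, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target generated by a small ReLU network with only 2 hidden units.
n, d = 200, 5
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, 2))
y = np.maximum(X @ W_true, 0.0).sum(axis=1, keepdims=True)

# Over-specification: train a network far wider than the 2 units needed.
h = 64
W1 = rng.normal(size=(d, h)) * 0.1
W2 = rng.normal(size=(h, 1)) * 0.1

def mse(W1, W2):
    """Mean squared error of the ReLU network on the full dataset."""
    return float(np.mean((np.maximum(X @ W1, 0.0) @ W2 - y) ** 2))

initial_mse = mse(W1, W2)

lr, lam, batch = 0.05, 1e-4, 32  # step size, L2 strength, minibatch size
for step in range(3000):
    # Minibatch SGD: sample a random batch each step.
    idx = rng.integers(0, n, size=batch)
    Xb, yb = X[idx], y[idx]

    # Forward pass with ReLU activation.
    Z = Xb @ W1
    A = np.maximum(Z, 0.0)
    err = A @ W2 - yb

    # Backward pass; L2 regularization adds lam * W to each gradient.
    g_out = err / batch
    g_W2 = A.T @ g_out + lam * W2
    g_Z = (g_out @ W2.T) * (Z > 0)
    g_W1 = Xb.T @ g_Z + lam * W1

    W1 -= lr * g_W1
    W2 -= lr * g_W2

final_mse = mse(W1, W2)
```

Despite the worst-case hardness results, on this benign toy instance plain SGD drives the training error down; the over-specified width gives the optimizer many redundant paths to a good fit.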
