CompNet: Neural networks growing via the compact network morphism

27 Apr 2018  ·  Jun Lu, Wei Ma, Boi Faltings

It is often the case that the performance of a neural network can be improved by adding layers. In practice, dozens of neural network architectures are typically trained in parallel, which is a wasteful process. We explore $CompNet$, which morphs a well-trained neural network into a deeper one such that the network function is preserved and the added layer is compact. The paper makes two contributions: (a) the modified network converges fast and retains the same functionality, so it does not need to be trained from scratch; (b) the size of the added layer is controlled by removing redundant parameters through sparse optimization. This differs from previous network morphism approaches, which tend to add more neurons or channels than actually required and thus produce redundant models. The method is illustrated with several neural network architectures on different data sets, including MNIST and CIFAR10.
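
To make the two ideas concrete (function-preserving layer insertion and sparsity-controlled layer size), below is a minimal PyTorch sketch. It is not the authors' implementation: the fully-connected setting, the identity initialization, the insertion point, and the plain L1 penalty are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): insert a new layer that initially
# computes the identity, so the morphed network keeps the same function, and
# penalize the new layer's weights so redundant rows can be pruned afterwards.
import torch
import torch.nn as nn

def make_identity_linear(width: int) -> nn.Linear:
    """Linear layer initialized to the identity map. If a ReLU follows it,
    the function is still preserved when the incoming activations are
    nonnegative (e.g., the insertion point is right after an existing ReLU)."""
    layer = nn.Linear(width, width, bias=True)
    with torch.no_grad():
        layer.weight.copy_(torch.eye(width))
        layer.bias.zero_()
    return layer

class MorphedNet(nn.Module):
    """Wrap a trained base network split at the insertion point and add a
    new, initially-identity layer in between."""
    def __init__(self, base_first: nn.Module, base_rest: nn.Module, width: int):
        super().__init__()
        self.first = base_first        # layers before the insertion point
        self.new_layer = make_identity_linear(width)
        self.rest = base_rest          # layers after the insertion point

    def forward(self, x):
        return self.rest(self.new_layer(self.first(x)))

def sparsity_penalty(layer: nn.Linear, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on the new layer's weights (a stand-in for the paper's
    sparse optimization); rows driven to zero can be removed to keep the
    added layer compact."""
    return lam * layer.weight.abs().sum()
```

During fine-tuning, the sparsity term would simply be added to the task loss, e.g. `loss = criterion(model(x), y) + sparsity_penalty(model.new_layer)`; rows of the new weight matrix that shrink to zero can then be pruned, which is what keeps the added layer compact rather than over-provisioned.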
