Refining the Structure of Neural Networks Using Matrix Conditioning

6 Aug 2019 · Roozbeh Yousefzadeh, Dianne P. O'Leary

Deep learning models have proven exceptionally useful for many machine learning tasks. However, for each new dataset, choosing an effective size and structure for the model can be a time-consuming process of trial and error. While a small network with few neurons might not capture the intricacies of a given task, too many neurons can lead to overfitting and poor generalization. Here, we propose a practical method that uses matrix conditioning to automatically design the layer structure of a feed-forward network, first adjusting the proportion of neurons among the layers and then scaling the overall size of the network up or down. Results on sample image and non-image datasets demonstrate that our method yields small networks with high accuracy. Finally, guided by matrix conditioning, we provide a method to effectively squeeze models that are already trained. Our techniques reduce the human cost of designing deep learning models and can also reduce training time and the expense of deploying neural networks in applications.
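The page lists no code, but the abstract's core idea, reading layer quality off the conditioning of each layer's weight matrix, can be sketched briefly. The snippet below is a minimal, hypothetical illustration rather than the authors' algorithm: `suggest_layer_sizes` and its inverse-log weighting are assumptions introduced here for concreteness; only the use of `numpy.linalg.cond` as the conditioning measure is standard practice.

```python
import numpy as np

def layer_condition_numbers(weights):
    """Condition number (ratio of largest to smallest singular value)
    of each layer's weight matrix, computed via the SVD."""
    return [np.linalg.cond(W) for W in weights]

def suggest_layer_sizes(weights, total_neurons):
    """Hypothetical heuristic: redistribute a fixed neuron budget
    among layers, giving a smaller share to layers whose weight
    matrices are ill-conditioned (large condition number)."""
    conds = np.array(layer_condition_numbers(weights))
    # Well-conditioned layers (small cond) receive larger scores.
    scores = 1.0 / np.log10(conds + 10.0)
    shares = scores / scores.sum()
    # Round to integer sizes, keeping at least one neuron per layer.
    return np.maximum(1, np.round(shares * total_neurons)).astype(int)

# Example: three layers with random weights and a budget of 74 neurons.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)),
           rng.standard_normal((32, 32)),
           rng.standard_normal((32, 10))]
print(layer_condition_numbers(weights))
print(suggest_layer_sizes(weights, total_neurons=74))
```

In a real workflow one would retrain after reproportioning and then scale the whole network up or down, mirroring the two-stage procedure the abstract describes.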
