Learning Compact Networks via Adaptive Network Regularization

Deep neural networks typically have a fixed architecture, where the number of units per layer is treated as a hyperparameter and tuned before training. Recently, strategies for training adaptive neural networks without a fixed architecture have seen renewed interest. In this paper, we employ a simple regularizer on the number of hidden units in the network, which we refer to as adaptive network regularization (ANR). This method places a penalty on the number of hidden units per layer, designed to encourage compactness and flexibility of the network architecture. This penalty acts as the sole tuning parameter over the network size, simplifying training. We describe a training strategy that grows the number of units during training, and show on several benchmark datasets that our model yields architectures smaller than those obtained by tuning the number of hidden units in a standard fixed architecture. Along with smaller architectures, we show on multiple datasets that our algorithm performs comparably to or better than fixed architectures learned via grid search over the hyperparameters. We motivate this model using small-variance asymptotics: a Bayesian neural network with a Poisson number of units per layer recovers our model in the small-variance limit.
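The core idea of growing a penalized network can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy stand-in using a single hidden layer of random tanh features whose output weights are fit by least squares, where a candidate unit is kept only if it lowers the penalized objective (mean squared error plus `lam` times the unit count). The penalty weight `lam`, the candidate budget, and the random-feature construction are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(3x) + noise.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

lam = 0.01  # hypothetical penalty weight on the number of hidden units


def penalized_objective(H, y, lam):
    """MSE of a least-squares output layer on activations H,
    plus lam times the number of columns (hidden units)."""
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = np.mean((H @ w - y) ** 2)
    return mse + lam * H.shape[1]


# Start from a bias-only "layer" and grow one unit at a time:
# propose a random tanh unit, keep it only if the penalized
# objective improves, over a fixed candidate budget.
H = np.ones((X.shape[0], 1))
obj0 = penalized_objective(H, y, lam)
best = obj0
for _ in range(50):
    a, b = rng.normal(size=(1,)), rng.normal()
    candidate = np.tanh(X @ a + b)[:, None]
    H_new = np.hstack([H, candidate])
    obj = penalized_objective(H_new, y, lam)
    if obj < best:
        H, best = H_new, obj

n_units = H.shape[1] - 1  # exclude the bias column
print(f"kept {n_units} hidden units, penalized objective {best:.4f}")
```

Increasing `lam` makes each extra unit more expensive and so yields a smaller final layer, which mirrors how the abstract's single penalty controls network size.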
