L0 Regularization Based Neural Network Design and Compression

31 May 2019  ·  S. Asim Ahmed

We consider the complexity of Deep Neural Networks (DNNs) and their associated massive over-parameterization. Such over-parameterization may entail susceptibility to adversarial attacks, loss of interpretability, and adverse Size, Weight, Power, and Cost (SWaP-C) considerations. We ask whether there are methodical ways (regularization) to reduce complexity, and how we can interpret the trade-off between the desired metric and the complexity of a DNN. Reducing complexity is directly applicable to scaling AI applications to real-world problems (especially off-the-cloud applications). We show the presence of a knee in the trade-off curve and how to evaluate it. We apply a form of L0 regularization to MNIST data and to signal modulation classification. We show that such regularization also captures saliency in the input space.
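The abstract does not specify which form of L0 regularization is used. Since the plain L0 norm is non-differentiable, a common differentiable surrogate is the hard-concrete gating scheme of Louizos et al. (2018), sketched below for illustration. The class name `L0Linear`, the hyperparameter values, and the penalty weight `lam` are all assumptions, not details from this paper.

```python
# A minimal sketch of L0 regularization via hard-concrete stochastic gates
# (Louizos et al., 2018). Illustrative only; this paper says merely that
# it applies "a form of L0 regularization".
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class L0Linear(nn.Module):
    """Linear layer with per-weight hard-concrete gates.

    Multiplying each weight by a gate z in [0, 1] and penalizing the
    expected number of nonzero gates gives a differentiable surrogate
    for the L0 norm of the weight matrix.
    """

    def __init__(self, in_features, out_features,
                 beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # log_alpha parameterizes each gate's "keep" probability.
        self.log_alpha = nn.Parameter(torch.zeros(out_features, in_features))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def gates(self):
        if self.training:
            # Reparameterized sample from the hard-concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1.0 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log(1.0 - u) + self.log_alpha) / self.beta)
        else:
            # Deterministic gate at test time.
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (gamma, zeta), then clip to [0, 1] so gates reach exactly 0.
        return torch.clamp(s * (self.zeta - self.gamma) + self.gamma, 0.0, 1.0)

    def l0_penalty(self):
        # Expected L0 norm: probability that each gate is nonzero, summed.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)).sum()

    def forward(self, x):
        return F.linear(x, self.weight * self.gates(), self.bias)


# Usage: sweeping lam traces out the accuracy-vs-complexity trade-off
# curve whose knee the abstract discusses.
layer = L0Linear(784, 10)  # e.g. flattened MNIST inputs, 10 classes
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
lam = 1e-3  # illustrative regularization strength
loss = F.cross_entropy(layer(x), y) + lam * layer.l0_penalty()
loss.backward()
```

Because the learned gates act multiplicatively on weights (and, by extension, on inputs), examining which input-connected gates survive training is one plausible route to the input-space saliency the abstract mentions.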
