Re-Weighted Learning for Sparsifying Deep Neural Networks

5 Feb 2018 · Igor Fedorov, Bhaskar D. Rao

This paper addresses the topic of sparsifying deep neural networks (DNNs). While DNNs are powerful models that achieve state-of-the-art performance on a large number of tasks, their large number of parameters poses serious storage and computational challenges. To combat these difficulties, a growing line of work focuses on pruning network weights without sacrificing performance. We propose a general affine scaling transformation (AST) algorithm to sparsify DNNs. Our approach follows in the footsteps of popular sparse recovery techniques, which have yet to be explored in the context of DNNs. We describe a principled framework for transforming densely connected DNNs into sparsely connected ones without sacrificing network performance. Unlike existing methods, our approach learns sparse connections at every layer simultaneously, and it achieves comparable pruning results on the architecture tested.
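
The page does not reproduce the paper's algorithm, so the sketch below is an illustrative assumption rather than the authors' method: a generic re-weighted l1 penalty in the spirit of the classical sparse recovery techniques the abstract references, applied to every layer of a small PyTorch network simultaneously and followed by magnitude pruning. All names and hyperparameters (`lam`, `eps`, the pruning threshold) are hypothetical choices for the sketch.

```python
# Minimal sketch of re-weighted sparsity regularization for a DNN.
# Assumption: a generic re-weighted l1 scheme, NOT the paper's exact
# AST update, which is not given on this page.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in data (MNIST-sized inputs); replace with a real loader.
x = torch.randn(512, 784)
y = torch.randint(0, 10, (512,))

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def reweight(model, eps=1e-3):
    # Re-weighting step: parameters that are already small receive a
    # larger penalty coefficient, pushing them toward exactly zero.
    with torch.no_grad():
        return {n: 1.0 / (p.abs() + eps)
                for n, p in model.named_parameters() if "weight" in n}

def penalty(model, w, lam=1e-4):
    # Weighted l1 penalty summed over all layers at once, mirroring the
    # abstract's claim of learning sparse connections at each layer
    # simultaneously rather than pruning layer by layer.
    return lam * sum((w[n] * p.abs()).sum()
                     for n, p in model.named_parameters() if "weight" in n)

w = reweight(model)
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y) + penalty(model, w)
    loss.backward()
    opt.step()
    w = reweight(model)  # refresh penalty weights between epochs

# Prune: zero out connections the re-weighted penalty has suppressed.
with torch.no_grad():
    for n, p in model.named_parameters():
        if "weight" in n:
            p.mul_((p.abs() > 1e-3).float())

nonzero = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"density after pruning: {nonzero / total:.3f}")
```

The design choice carried over from the abstract is that one set of penalty weights is maintained for the whole network, so sparsity patterns emerge jointly across layers within a single training run.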

