Winning the Lottery with Continuous Sparsification

10 Dec 2019 · Pedro Savarese, Hugo Silva, Michael Maire

The Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks that, when trained from scratch, match the performance of the dense counterpart given a comparable training budget. The proposed algorithm to search for winning tickets, Iterative Magnitude Pruning, consistently finds sparse sub-networks which train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.
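The Iterative Magnitude Pruning loop mentioned above can be sketched as follows. This is a minimal NumPy sketch, not the paper's method: `train_fn`, `sparsity_per_round`, and the rewind-to-initialization step are assumptions based on the standard IMP procedure (train, prune the smallest-magnitude surviving weights, reset the survivors to their initial values, repeat).

```python
import numpy as np

def prune_lowest(weights, mask, frac):
    """Zero out the fraction `frac` of currently-unmasked weights
    with the smallest magnitudes, returning the updated binary mask."""
    surviving = np.abs(weights[mask.astype(bool)])
    k = int(frac * surviving.size)
    if k == 0:
        return mask
    # Magnitude threshold below which surviving weights are pruned.
    threshold = np.partition(surviving, k - 1)[k - 1]
    return mask * (np.abs(weights) > threshold)

def iterative_magnitude_pruning(init_weights, train_fn, rounds, sparsity_per_round):
    """Sketch of IMP: train, prune by magnitude, rewind to initialization.

    `train_fn` is a placeholder for a full training run that maps the
    current (masked) weights to trained weights.
    """
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask)   # train the current sub-network
        mask = prune_lowest(trained, mask, sparsity_per_round)
        # Rewind: surviving weights go back to their original init values,
        # yielding the candidate "winning ticket".
    return init_weights * mask, mask
```

For example, with a 10-weight toy model and 20% pruning per round, two rounds leave 7 of the 10 weights unmasked.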
