Search Results for author: Mansheej Paul

Found 4 papers, 2 papers with code

Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?

no code implementations • 6 Oct 2022 • Mansheej Paul, Feng Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite

Third, we show how the flatness of the error landscape at the end of training determines a limit on the fraction of weights that can be pruned at each iteration of IMP.
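The flatness result above concerns the per-iteration pruning rate of IMP. As a reference point, here is a minimal numpy sketch of a single magnitude-pruning step, where `fraction` is that per-iteration rate; the function name and toy setup are illustrative assumptions, not the paper's code.

```python
import numpy as np

def imp_prune_step(weights: np.ndarray, mask: np.ndarray, fraction: float) -> np.ndarray:
    """Zero out the smallest-magnitude `fraction` of the still-active weights."""
    active = np.flatnonzero(mask)                # indices of unpruned weights
    n_prune = int(fraction * active.size)
    if n_prune == 0:
        return mask
    order = np.argsort(np.abs(weights[active]))  # rank active weights by magnitude
    new_mask = mask.copy()
    new_mask[active[order[:n_prune]]] = 0        # drop the smallest ones
    return new_mask

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
mask = imp_prune_step(w, np.ones_like(w), fraction=0.2)  # prune 20% in one iteration
print(mask.mean())  # 0.8 of the weights survive
```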

Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks

1 code implementation • 2 Jun 2022 • Mansheej Paul, Brett W. Larsen, Surya Ganguli, Jonathan Frankle, Gintare Karolina Dziugaite

A striking observation about iterative magnitude pruning (IMP; Frankle et al. 2020) is that, after just a few hundred steps of dense training, the method can find a sparse sub-network that can be trained to the same accuracy as the dense network.
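As a rough illustration of the rewind-and-retrain loop the snippet describes (brief dense training to a checkpoint, then repeated prune / rewind / retrain), here is a self-contained toy sketch; the quadratic "training" objective, step counts, and 20% rate are placeholder assumptions, not the papers' actual setup.

```python
import numpy as np

def train(w: np.ndarray, mask: np.ndarray, steps: int, lr: float = 0.1) -> np.ndarray:
    # Toy "training": gradient steps toward a fixed target, under the mask.
    target = np.linspace(-1.0, 1.0, w.size)
    for _ in range(steps):
        w = (w - lr * (w - target)) * mask  # pruned weights stay zero
    return w

rng = np.random.default_rng(0)
dense = np.ones(100)
w0 = train(rng.normal(size=100), dense, steps=10)  # brief dense training: the rewind point
mask = dense.copy()
for _ in range(3):                                 # IMP iterations
    w = train(w0 * mask, mask, steps=50)           # rewind to w0, retrain under mask
    thresh = np.quantile(np.abs(w[mask == 1]), 0.2)
    mask = mask * (np.abs(w) >= thresh)            # prune smallest 20% of surviving weights
print(f"final sparsity: {1 - mask.mean():.2f}")
```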

Deep Learning on a Data Diet: Finding Important Examples Early in Training

1 code implementation • NeurIPS 2021 • Mansheej Paul, Surya Ganguli, Gintare Karolina Dziugaite

Compared to recent work that prunes data by discarding examples that are rarely forgotten over the course of training, our scores use only local information early in training.
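The "local information early in training" refers to scores such as the paper's EL2N score, the L2 norm of the softmax error at an early checkpoint. Below is a minimal numpy sketch, with random logits standing in for a real early-training model; the paper also averages scores over several runs, which is omitted here.

```python
import numpy as np

def el2n_scores(logits: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-example ||softmax(logits) - one_hot(label)||_2."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    onehot = np.eye(logits.shape[1])[labels]
    return np.linalg.norm(p - onehot, axis=1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))       # hypothetical early-training logits
labels = rng.integers(0, 10, size=8)
scores = el2n_scores(logits, labels)    # high score = hard / important example
print(scores.round(3))
```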
