[Re] Rigging the Lottery: Making All Tickets Winners

RC 2020 · Varun Sundar, Rajat Vadiraj Dwaraknath

Scope of Reproducibility
For a fixed parameter count and compute budget, the proposed algorithm (RigL) claims to directly train sparse networks that match or exceed the performance of existing dense-to-sparse training techniques (such as pruning). RigL does so while requiring constant Floating Point Operations (FLOPs) throughout training. The technique obtains state-of-the-art performance on a variety of tasks, including image classification and character-level language modelling.
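For context, below is a minimal PyTorch sketch of the drop-and-grow connectivity update at the core of RigL, as we understand it. The function name `rigl_step`, the per-layer treatment, and the choice to exclude just-dropped weights from regrowth are our simplifications, not the authors' exact implementation.

```python
import torch

@torch.no_grad()
def rigl_step(weight, mask, dense_grad, drop_frac=0.3):
    """Sketch of one RigL connectivity update for a single layer (simplified).

    Drop the `drop_frac` fraction of active weights with the smallest
    magnitude, then grow an equal number of currently inactive connections
    with the largest dense-gradient magnitude; grown weights start at zero.
    """
    flat_w, flat_m, flat_g = weight.view(-1), mask.view(-1), dense_grad.view(-1)
    n_update = int(drop_frac * flat_m.sum().item())

    # Grow candidates: connections inactive *before* the drop, scored by
    # |gradient|, so weights dropped in this step are not immediately regrown.
    grow_scores = torch.where(flat_m.bool(),
                              torch.full_like(flat_g, -float("inf")),
                              flat_g.abs())
    grow_idx = torch.topk(grow_scores, n_update).indices

    # Drop candidates: active weights with the smallest magnitude.
    drop_scores = torch.where(flat_m.bool(), flat_w.abs(),
                              torch.full_like(flat_w, float("inf")))
    drop_idx = torch.topk(drop_scores, n_update, largest=False).indices

    flat_m[drop_idx] = 0
    flat_m[grow_idx] = 1
    flat_w[drop_idx] = 0.0  # zero out dropped weights
    flat_w[grow_idx] = 0.0  # newly grown connections are initialized to zero
    return weight, mask
```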

Methodology
We implement RigL from scratch in PyTorch, using boolean masks to simulate unstructured sparsity. We rely on the description provided in the original paper and refer to the authors' code only for specific implementation details, such as handling overflow in the ERK initialization. We evaluate sparse training with RigL for WideResNet-22-2 on CIFAR-10 and ResNet-50 on CIFAR-100, requiring 2 hours and 6 hours per training run respectively on a GTX 1080 GPU.
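To illustrate what simulating unstructured sparsity with boolean masks means in practice, here is a minimal sketch of a masked convolution; the class name `MaskedConv2d` is ours, and a full implementation also covers linear layers and the mask-update logic.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Sketch: a convolution whose kernel is multiplied by a boolean mask.

    The dense weight tensor stays in memory, so this only *simulates*
    unstructured sparsity; masked-out connections do not contribute to
    the forward pass.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # A buffer (not a parameter): saved and moved with the module,
        # but never updated by the optimizer.
        self.register_buffer("mask", torch.ones_like(self.weight, dtype=torch.bool))

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```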

Results
We reproduce RigL's performance on CIFAR-10 to within 0.1% of the reported value. On both CIFAR-10 and CIFAR-100, the central claim holds: given a fixed training budget, RigL surpasses existing dynamic sparse training methods over a range of target sparsities. By training longer, its performance can match or exceed that of iterative pruning while consuming constant FLOPs throughout training. We also show that there is little benefit in tuning RigL's hyperparameters for every (sparsity, initialization) pair; the reference hyperparameters are often close to optimal. Going beyond the original paper, we find that the optimal initialization scheme depends on the training constraint: while the Erdős-Rényi-Kernel (ERK) distribution outperforms the Random distribution for a fixed parameter count, the Random distribution performs better for a fixed FLOP count. Finally, redistributing layer-wise sparsity during training can bridge the performance gap between the two initialization schemes, but increases computational cost.
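As a reference for the initialization schemes compared above, the sketch below computes ERK layer-wise densities under our reading of the scheme. The function `erk_densities` and its simple proportional rescaling are our illustration; the overflow handling mentioned in the Methodology (capping per-layer density at 1) is omitted.

```python
import numpy as np

def erk_densities(layer_shapes, target_density):
    """Sketch of Erdős-Rényi-Kernel (ERK) layer-wise density allocation.

    Each layer's density is taken proportional to sum(shape) / prod(shape),
    e.g. (c_out + c_in + kh + kw) / (c_out * c_in * kh * kw) for a conv
    kernel, then rescaled so the overall fraction of remaining parameters
    equals `target_density`. Capping densities at 1 (overflow) is omitted.
    """
    n_params = np.array([np.prod(s) for s in layer_shapes], dtype=float)
    raw = np.array([np.sum(s) / np.prod(s) for s in layer_shapes], dtype=float)
    scale = target_density * n_params.sum() / (raw * n_params).sum()
    return scale * raw

# Example: a 3x3 conv (64 -> 128 channels) and a 10-way linear classifier
# at 90% sparsity (10% overall density).
print(erk_densities([(128, 64, 3, 3), (10, 128)], target_density=0.1))
```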

What was easy
The authors provide code for most of the experiments presented in the paper. The code was easy to run and allowed us to verify the correctness of our re-implementation. The paper also provides a thorough and clear description of the proposed algorithm, without any obvious errors or confusing exposition.

What was difficult
Tuning hyperparameters involved multiple random seeds and took longer than anticipated. Verifying the correctness of a few baselines was tricky: it required ensuring that the optimizer's gradient (or momentum) buffers were sparse (or dense), as specified by each algorithm. Compute limits prevented us from evaluating on larger datasets such as ImageNet.
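As an example of the kind of check this involved, the sketch below re-applies each parameter's mask to its SGD momentum buffer after an optimizer step. The function `mask_momentum_buffers` and the `masks` dictionary are our own names; which buffers must stay sparse or dense depends on the baseline in question.

```python
import torch

def mask_momentum_buffers(optimizer, masks):
    """Sketch: keep SGD momentum buffers sparse after each optimizer step.

    `masks` is assumed to map parameter tensors to boolean masks of the same
    shape. Some baselines instead require dense buffers, in which case this
    masking must not be applied.
    """
    for group in optimizer.param_groups:
        for p in group["params"]:
            if p not in masks:
                continue
            buf = optimizer.state[p].get("momentum_buffer")
            if buf is not None:
                # Zero the momentum of pruned connections in place.
                buf.mul_(masks[p].to(buf.dtype))
```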

Communication with original authors
We had responsive communication with the original authors, which helped clarify a few implementation and evaluation details, particularly regarding the FLOP counting procedure.
