RC 2020 • Varun Sundar, Rajat Vadiraj Dwaraknath
For a fixed parameter count and compute budget, the proposed algorithm (RigL) is claimed to directly train sparse networks that match or exceed the performance of existing dense-to-sparse training techniques (such as pruning).
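The core of RigL is a periodic connectivity update: drop the active weights with the smallest magnitude, then regrow the same number of inactive connections at the positions with the largest gradient magnitude, so sparsity stays fixed throughout training. A minimal NumPy sketch of one such update (the function name, signature, and `drop_frac` default are illustrative, not the reference implementation):

```python
import numpy as np

def rigl_update(weights, mask, grads, drop_frac=0.3):
    """One RigL-style connectivity update (sketch).

    Drops the smallest-magnitude active weights, then regrows the same
    number of inactive connections where the dense gradient magnitude is
    largest, keeping the total number of active weights constant.
    """
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    k = int(drop_frac * active.size)
    if k == 0:
        return mask
    # Drop: active positions with the smallest |w|.
    drop = active[np.argsort(np.abs(weights.flat[active]))[:k]]
    # Grow: inactive positions with the largest |grad|.
    grow = inactive[np.argsort(np.abs(grads.flat[inactive]))[-k:]]
    new_mask = mask.copy()
    new_mask.flat[drop] = 0
    new_mask.flat[grow] = 1
    # Newly grown weights start at zero, as described in the paper.
    weights.flat[grow] = 0.0
    return new_mask
```

In the full algorithm this update fires every few hundred steps with a drop fraction that decays over training; between updates, only the masked (active) weights receive gradient updates.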