Search Results for author: Anand Dhoot

Found 3 papers, 0 papers with code

Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators

no code implementations • 9 Jan 2020 • Noah Gamboa, Kais Kudrolli, Anand Dhoot, Ardavan Pedram

This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs.
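Below is a minimal sketch of the general idea of gradual pruning with a mask that is frozen after a set number of epochs. It is not the paper's method: it uses plain unstructured magnitude pruning with a hypothetical linear sparsity schedule (`target_sparsity`, `prune_epochs`, and `PrunedConv2d` are illustrative names), whereas Campfire prunes in a structured, hardware-friendly pattern.

```python
# Sketch: gradual magnitude pruning, mask frozen after a set number of epochs.
# Assumptions: linear sparsity ramp, unstructured per-weight masking.
import torch
import torch.nn as nn

def target_sparsity(epoch: int, final_sparsity: float, prune_epochs: int) -> float:
    """Hypothetical schedule: ramp sparsity linearly from 0 to final_sparsity."""
    return final_sparsity * min(1.0, epoch / prune_epochs)

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

class PrunedConv2d(nn.Conv2d):
    """Conv layer whose effective weights are masked; the mask can be frozen."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_buffer("mask", torch.ones_like(self.weight))
        self.frozen = False

    def update_mask(self, sparsity: float):
        # Recompute the mask only while still in the pruning phase.
        if not self.frozen:
            self.mask = magnitude_mask(self.weight.data, sparsity)

    def forward(self, x):
        return self._conv_forward(x, self.weight * self.mask, self.bias)

# Usage sketch: call update_mask(target_sparsity(epoch, ...)) once per epoch,
# then set layer.frozen = True after prune_epochs so the remaining training
# proceeds on a fixed sparse weight matrix.
```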

Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training

no code implementations • 25 Sep 2019 • Noah Gamboa, Kais Kudrolli, Anand Dhoot, Ardavan Pedram

We show that our method produces sparse versions of ResNet50 and ResNet50v1.5 trained on full ImageNet while staying within a negligible <1% margin of accuracy loss.
