# Gradient $\ell_1$ Regularization for Quantization Robustness

Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling

We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths as energy and memory requirements of the application change...
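The regularization scheme described above penalizes the $\ell_1$ norm of the loss gradient: to first order, a weight perturbation $\delta w$ (such as quantization noise) changes the loss by at most $\|\nabla_w \mathcal{L}\|_1 \cdot \|\delta w\|_\infty$, so keeping the gradient's $\ell_1$ norm small bounds the damage done by quantization. Below is a minimal numpy sketch of this idea on a toy linear model; it is illustrative only and not the authors' implementation (the model, data, and the value of `lam` are made up, and a deep-learning framework would compute the penalty's gradient via double backpropagation rather than analytically).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression problem: loss(w) = mean((X w - y)^2).
X = rng.normal(size=(32, 4))
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=32)
w = rng.normal(size=4)

def loss(w):
    return np.mean((X @ w - y) ** 2)

def grad(w):
    # Analytic gradient of the mean-squared-error loss w.r.t. w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def regularized_loss(w, lam=0.05):
    # Gradient-l1 penalty: by a first-order Taylor expansion,
    #   |loss(w + dw) - loss(w)| <~ ||grad(w)||_1 * ||dw||_inf,
    # so driving the gradient's l1 norm down flattens the loss
    # around the solution and bounds the effect of bounded
    # quantization noise dw.
    return loss(w) + lam * np.abs(grad(w)).sum()

# The penalty is non-negative, so the regularized loss never
# undercuts the plain loss.
print(regularized_loss(w) >= loss(w))  # prints True
```

In a real training loop the regularized loss itself would be differentiated (double backpropagation), so that gradient descent explicitly shrinks $\|\nabla_w \mathcal{L}\|_1$ alongside the task loss.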


# Code

No code implementations yet.