Exploiting Non-Linear Redundancy for Neural Model Compression

Deploying deep learning models, comprising non-linear combinations of millions, even billions, of parameters is challenging given the memory, power and compute constraints of the real world. This situation has led to research into model compression techniques, most of which rely on suboptimal heuristics and do not consider the parameter redundancies due to linear dependence between neuron activations in overparametrized networks...
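As an illustrative aside (not the paper's actual algorithm), the kind of redundancy from linear dependence between neuron activations mentioned above can be made concrete with a small sketch: collect a layer's activation matrix over a batch of inputs and inspect its numerical rank via SVD. A rank well below the neuron count indicates that some neurons are (near) linear combinations of others and are, in principle, compressible. All names and sizes below are hypothetical.

```python
# Minimal sketch (assumption: NOT the paper's method) of measuring redundancy
# via linear dependence between neuron activations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 64 neurons whose activations span only 16 independent
# directions, so many neurons are linear combinations of the others.
n_samples, n_independent, n_neurons = 1024, 16, 64
basis = rng.standard_normal((n_samples, n_independent))
mixing = rng.standard_normal((n_independent, n_neurons))
activations = basis @ mixing  # shape: (n_samples, n_neurons)

# Numerical rank of the activation matrix reveals the redundancy.
singular_values = np.linalg.svd(activations, compute_uv=False)
tol = singular_values.max() * max(activations.shape) * np.finfo(float).eps
numerical_rank = int((singular_values > tol).sum())

print(f"neurons: {n_neurons}, numerical rank of activations: {numerical_rank}")
# Only `numerical_rank` neurons carry independent information; the remaining
# neurons (and their associated weights) are redundant and could be removed
# while adjusting the next layer's weights to compensate.
```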
