WSNet: Compact and Efficient Networks Through Weight Sampling

We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via ad hoc processing such as model pruning or filter factorization...
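To make the idea of weight sampling concrete, here is a minimal sketch assuming one plausible scheme: convolution filters are read out as overlapping windows of a single compact shared parameter vector, so many filters reuse the same underlying weights. The function name `sample_filters`, the symbol `phi`, and the stride-based sampling rule are all illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def sample_filters(phi: np.ndarray, n_filters: int,
                   kernel_size: int, stride: int) -> np.ndarray:
    """Extract n_filters overlapping slices of length kernel_size from phi.

    With stride < kernel_size, adjacent filters overlap and share weights,
    so the layer needs far fewer stored parameters than n_filters * kernel_size.
    """
    needed = (n_filters - 1) * stride + kernel_size
    assert phi.shape[0] >= needed, "shared parameter vector too short"
    return np.stack([phi[i * stride : i * stride + kernel_size]
                     for i in range(n_filters)])

rng = np.random.default_rng(0)
n_filters, kernel_size, stride = 64, 9, 2   # stride < kernel_size => sharing
phi = rng.standard_normal((n_filters - 1) * stride + kernel_size)

filters = sample_filters(phi, n_filters, kernel_size, stride)
print(filters.shape)   # (64, 9): 576 effective weights drawn from only 135 stored parameters
```

Under these assumptions the compression ratio is roughly kernel_size / stride, since each stored weight is reused by about that many filters.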
