Structured Multi-Hashing for Model Compression

25 Nov 2019 · Elad Eban, Yair Movshovitz-Attias, Hao Wu, Mark Sandler, Andrew Poon, Yerlan Idelbayev, Miguel A. Carreira-Perpinan

Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory. Model compression methods address this limitation by reducing the memory footprint, latency, or energy consumption of a model with minimal impact on accuracy...
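The abstract is truncated before it describes the method itself, so the sketch below does not reproduce the paper's structured multi-hashing scheme. It only illustrates the general idea behind hash-based weight sharing for compression (in the spirit of HashedNets): a large virtual weight matrix is backed by a much smaller pool of trainable parameters, with a deterministic hash mapping each weight position to a pool entry. All names, shapes, and hash constants here are hypothetical.

```python
import numpy as np

def hashed_layer_weights(pool, out_dim, in_dim):
    """Materialize an (out_dim, in_dim) weight matrix whose entries are
    drawn from a much smaller parameter pool via a deterministic hash,
    so many weight positions share the same underlying parameter."""
    rows = np.arange(out_dim)[:, None]
    cols = np.arange(in_dim)[None, :]
    # Simple multiplicative hash of the (row, col) position into the pool.
    idx = (rows * 2654435761 + cols * 40503) % pool.size
    return pool[idx]

# A pool of 1,000 trainable parameters backs a 512x512 layer
# (262,144 virtual weights), a ~260x reduction in stored parameters.
pool = (0.01 * np.random.randn(1000)).astype(np.float32)
W = hashed_layer_weights(pool, 512, 512)
print(W.shape, pool.size)  # (512, 512) 1000
```

In such schemes only the pool (and the fixed hash) is stored and trained; gradients for shared positions accumulate into the same pool entry. The paper's contribution, per its title, is a structured variant of this family of techniques.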


Code

No code implementations yet.
