no code implementations • 1 Mar 2018 • Asit Mishra, Debbie Marr
Today's high-performance deep learning architectures involve large models with numerous parameters.
no code implementations • ICLR 2018 • Asit Mishra, Debbie Marr
Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models.
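A minimal numpy sketch of a knowledge-distillation loss of the kind described above (the temperature `T`, weighting `alpha`, and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer targets.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft term: cross-entropy between the teacher's and student's
    # temperature-softened distributions, rescaled by T^2.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -np.sum(p_t * np.log(p_s + 1e-12), axis=-1).mean() * (T * T)
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

In a low-precision setting, the student here would be the quantized network and the teacher a full-precision network; the blend of soft and hard targets is the standard distillation recipe.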
no code implementations • 20 Oct 2017 • Supriya Kapur, Asit Mishra, Debbie Marr
Similar to convolutional neural networks, recurrent neural networks (RNNs) typically suffer from over-parameterization.
no code implementations • ICLR 2018 • Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network.
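A minimal sketch of the uniform quantizer such a scheme relies on, applied to activations or weights (the clipping range `x_max` and the function name are assumptions for illustration; the paper's widening step would then multiply the number of filter maps per layer by a constant factor):

```python
import numpy as np

def quantize(x, bits, x_max=1.0):
    # Uniform quantization of x to 2**bits - 1 evenly spaced levels
    # over the clipped range [-x_max, x_max].
    levels = 2 ** bits - 1
    x = np.clip(x, -x_max, x_max)
    scale = levels / (2 * x_max)
    return np.round((x + x_max) * scale) / scale - x_max
```

At 2 bits this maps every value onto one of the four levels {-1, -1/3, 1/3, 1}; widening the layer restores the representational capacity lost to the coarser numerics.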
no code implementations • 10 Apr 2017 • Asit Mishra, Jeffrey J Cook, Eriko Nurvitadhi, Debbie Marr
For computer vision applications, prior work has shown that reducing the numeric precision of model parameters (network weights) in deep neural networks is effective, but also that reducing the precision of activations hurts model accuracy far more than reducing the precision of the weights.
no code implementations • 2 Oct 2016 • Ganesh Venkatesh, Eriko Nurvitadhi, Debbie Marr
To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values.
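A minimal sketch of the two ideas combined: a 2-bit (ternary) weight encoding over {-1, 0, +1} with a per-tensor scale, and a dot product that skips all zero-weight positions so only additions and subtractions remain (the threshold heuristic and function names are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def ternarize(w, threshold=0.05):
    # Map weights to {-1, 0, +1} (a 2-bit code) with a per-tensor
    # scale taken as the mean magnitude of the surviving weights.
    q = np.zeros_like(w)
    q[w > threshold] = 1.0
    q[w < -threshold] = -1.0
    nonzero = np.abs(q) > 0
    scale = np.abs(w[nonzero]).mean() if nonzero.any() else 0.0
    return q, scale

def sparse_dot(x, q, scale):
    # Zero-skipping dot product: positions where q == 0 contribute
    # nothing and are never touched; the rest need no multiplies.
    pos = x[q > 0].sum()
    neg = x[q < 0].sum()
    return scale * (pos - neg)
```

The compute saving comes from both effects at once: multiplications collapse into sign-dependent accumulation, and every zeroed weight removes an operation entirely.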