Search Results for author: Debbie Marr

Found 6 papers, 0 papers with code

WRPN & Apprentice: Methods for Training and Inference using Low-Precision Numerics

no code implementations • 1 Mar 2018 • Asit Mishra, Debbie Marr

Today's high-performance deep learning architectures involve large models with numerous parameters.

Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy

no code implementations • ICLR 2018 • Asit Mishra, Debbie Marr

Low-precision numerics and model compression using knowledge distillation are popular techniques for lowering both the compute requirements and the memory footprint of deployed models.

Image Classification • Knowledge Distillation • +3
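
The Apprentice recipe pairs a low-precision student network with a full-precision teacher. A minimal NumPy sketch of the standard distillation objective such work builds on follows; the temperature T, blend weight alpha, and function names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: cross-entropy between the softened teacher and
    # student distributions, scaled by T**2 as in standard distillation.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T**2
    # Hard-label term: ordinary cross-entropy against the ground truth.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1.0 - alpha) * hard
```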

Low Precision RNNs: Quantizing RNNs Without Losing Accuracy

no code implementations • 20 Oct 2017 • Supriya Kapur, Asit Mishra, Debbie Marr

Similar to convolutional neural networks, recurrent neural networks (RNNs) typically suffer from over-parameterization.

Quantization
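
Over-parameterized RNN weights are a natural target for quantization. Below is a generic symmetric uniform weight quantizer of the kind this line of work builds on; the 4-bit default and per-matrix scaling rule are assumptions, not the paper's exact scheme.

```python
import numpy as np

def quantize_weights(w, bits=4):
    # Symmetric uniform quantizer: snap each weight onto a signed integer
    # grid with 2**(bits-1) - 1 levels per side, then rescale to floats.
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax + 1e-12
    return np.clip(np.round(w / scale), -qmax, qmax) * scale
```

For an RNN, the same quantizer would be applied independently to each input and recurrent weight matrix (e.g. W_xh and W_hh).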

WRPN: Wide Reduced-Precision Networks

no code implementations • ICLR 2018 • Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr

We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer; we find that this scheme matches or surpasses the accuracy of the baseline full-precision network.
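
The trade WRPN makes, lower precision offset by wider layers, can be sketched for a single fully connected layer. The bit-widths, quantizer, and 2x widening factor below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def quant(t, bits):
    # Symmetric uniform quantizer applied to either tensor.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(t).max() / qmax + 1e-12
    return np.clip(np.round(t / scale), -qmax, qmax) * scale

def wrpn_layer(x, w, a_bits=4, w_bits=2):
    # Quantize activations and weights before the matmul, then ReLU.
    # In the WRPN recipe, w would be instantiated with more output units
    # (a wider layer) than the full-precision baseline to recover accuracy.
    return np.maximum(quant(x, a_bits) @ quant(w, w_bits), 0.0)

# Widening: a baseline layer of 64 units might run as 128 units (2x wide)
# at 4-bit activations / 2-bit weights.
```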

WRPN: Training and Inference using Wide Reduced-Precision Networks

no code implementations • 10 Apr 2017 • Asit Mishra, Jeffrey J Cook, Eriko Nurvitadhi, Debbie Marr

For computer vision applications, prior work has shown that reducing the numeric precision of model parameters (network weights) in deep neural networks is effective, but also that reducing the precision of activations hurts model accuracy much more than reducing the precision of weights.

Quantization
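
Since activations are the precision-sensitive tensor here, the activation quantizer is the piece worth sketching separately. The [0, 1] clip and unsigned grid below are a common reduced-precision convention, assumed rather than taken verbatim from the paper.

```python
import numpy as np

def quantize_activations(a, bits=4):
    # Unsigned uniform quantizer for post-ReLU activations: clip to
    # [0, 1], then snap onto a grid of 2**bits - 1 steps.
    a = np.clip(a, 0.0, 1.0)
    levels = 2 ** bits - 1                  # e.g. 15 steps for 4-bit
    return np.round(a * levels) / levels
```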

Accelerating Deep Convolutional Networks using low-precision and sparsity

no code implementations • 2 Oct 2016 • Ganesh Venkatesh, Eriko Nurvitadhi, Debbie Marr

To improve compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate execution, we aggressively skip operations on zero values.

General Classification
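
Both levers in that sentence, 2-bit weights and zero-skipping, can be sketched together: ternarizing weights creates exact zeros, and the dot product then skips them. The threshold, scaling rule, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ternarize(w, thresh=0.05):
    # Map weights to {-s, 0, +s}: a 2-bit-style encoding in which small
    # weights become exactly zero, creating sparsity to exploit at runtime.
    mask = np.abs(w) > thresh
    s = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.sign(w) * mask * s

def sparse_dot(x, w_t):
    # Zero-skipping: accumulate only where the ternary weight is nonzero.
    nz = np.nonzero(w_t)[0]
    return float(x[nz] @ w_t[nz])
```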
