Search Results for author: Asit Mishra

Found 7 papers, 1 paper with code

Accelerating Sparse Deep Neural Networks

2 code implementations • 16 Apr 2021 • Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius

We present the design and behavior of Sparse Tensor Cores, which exploit a 2:4 (50%) sparsity pattern that leads to twice the math throughput of dense matrix units.

Math
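The excerpt above describes the 2:4 pattern that Sparse Tensor Cores accelerate. Below is a minimal sketch (not NVIDIA's implementation) of enforcing that pattern: in every contiguous group of four weights, keep the two largest magnitudes and zero the rest, yielding the 50% structured sparsity the hardware exploits for twice the math throughput. The function name and the magnitude-based selection are illustrative assumptions.

```python
import numpy as np

def prune_2_to_4(weights: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude values in each group of 4 weights."""
    flat = weights.reshape(-1, 4)                    # group weights in fours
    keep = np.argsort(np.abs(flat), axis=1)[:, 2:]   # indices of the 2 largest per group
    mask = np.zeros_like(flat, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return (flat * mask).reshape(weights.shape)

w = np.random.randn(8, 8).astype(np.float32)
w_sparse = prune_2_to_4(w)
assert np.count_nonzero(w_sparse) == w.size // 2     # exactly 50% of entries remain
```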

Exploration of Low Numeric Precision Deep Learning Inference Using Intel FPGAs

no code implementations • 12 Jun 2018 • Philip Colangelo, Nasibeh Nasiri, Asit Mishra, Eriko Nurvitadhi, Martin Margala, Kevin Nealis

This results in a trade-off between throughput and accuracy and can be tailored for different networks through various combinations of activation and weight data widths.

Distributed, Parallel, and Cluster Computing • Hardware Architecture
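The throughput/accuracy knob mentioned in the excerpt is the bit width chosen independently for weights and activations. Below is a hedged sketch of that idea using symmetric uniform quantization (e.g. 4-bit weights with 8-bit activations); the function name, scaling rule, and bit widths are illustrative assumptions, not taken from the paper or any FPGA toolchain.

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantize-dequantize of x to the given bit width."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 for signed 8-bit
    scale = float(np.max(np.abs(x))) / levels
    scale = scale if scale > 0 else 1.0
    return np.round(x / scale) * scale

weights = np.random.randn(256, 256).astype(np.float32)
activations = np.abs(np.random.randn(1, 256)).astype(np.float32)

# Narrower data widths raise throughput on the FPGA fabric but cost accuracy.
w_q = quantize_uniform(weights, bits=4)
a_q = quantize_uniform(activations, bits=8)
output = a_q @ w_q
```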

WRPN & Apprentice: Methods for Training and Inference using Low-Precision Numerics

no code implementations • 1 Mar 2018 • Asit Mishra, Debbie Marr

Today's high performance deep learning architectures involve large models with numerous parameters.

Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy

no code implementations • ICLR 2018 • Asit Mishra, Debbie Marr

Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models.

Image Classification • Knowledge Distillation • +3
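The Apprentice excerpt combines low-precision numerics with knowledge distillation. Below is a minimal PyTorch sketch of a distillation loss that a full-precision teacher could provide to a low-precision student; the temperature `T`, blending weight `alpha`, and function name are illustrative assumptions, not the paper's settings.

```python
import torch.nn.functional as F

def apprentice_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard-label loss on the ground-truth classes.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: match the teacher's temperature-softened distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```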

Low Precision RNNs: Quantizing RNNs Without Losing Accuracy

no code implementations • 20 Oct 2017 • Supriya Kapur, Asit Mishra, Debbie Marr

Similar to convolutional neural networks, recurrent neural networks (RNNs) typically suffer from over-parameterization.

Quantization

WRPN: Wide Reduced-Precision Networks

no code implementations • ICLR 2018 • Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr

We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network.
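Below is a hedged PyTorch sketch of the WRPN idea described in the excerpt: reduce the precision of activations (and weights) while compensating by widening each layer with more filter maps. The `width_mult` value, 2-bit quantizer, and module layout are illustrative assumptions; the paper explores several width and precision combinations.

```python
import torch
import torch.nn as nn

def quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform quantization of values assumed to lie in [0, 1]."""
    levels = 2 ** bits - 1
    return torch.round(x.clamp(0, 1) * levels) / levels

class WideLowPrecisionBlock(nn.Module):
    def __init__(self, in_ch, out_ch, width_mult=2, bits=2):
        super().__init__()
        self.bits = bits
        # Widen the layer: more filter maps at lower precision.
        self.conv = nn.Conv2d(in_ch, out_ch * width_mult, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.act(self.conv(x))
        # Quantize activations after the non-linearity.
        return quantize(y, self.bits)
```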

WRPN: Training and Inference using Wide Reduced-Precision Networks

no code implementations • 10 Apr 2017 • Asit Mishra, Jeffrey J Cook, Eriko Nurvitadhi, Debbie Marr

For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks, but also that reducing the precision of activations hurts model accuracy much more than reducing the precision of weights.

Quantization
