Search Results for author: Sayeh Sharify

Found 9 papers, 0 papers with code

Mixed-Precision Quantization with Cross-Layer Dependencies

no code implementations11 Jul 2023 Zihao Deng, Xin Wang, Sayeh Sharify, Michael Orshansky

Quantization that assigns the same bit-width to all layers leads to large accuracy degradation at low precision and is wasteful at high-precision settings.

Quantization
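As an illustration of the general idea behind mixed-precision quantization (a minimal sketch, not the paper's method), each layer can be given its own bit-width, with more sensitive layers kept at higher precision. The layer names and bit-width assignments below are hypothetical.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bit-width,
    returned in dequantized (floating-point) form."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable steps at 8 bits
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

# Toy per-layer assignment: a hypothetical sensitive layer keeps 8 bits,
# a more tolerant one is pushed down to 4 bits.
rng = np.random.default_rng(0)
weights = {"conv1": rng.normal(size=100), "fc": rng.normal(size=100)}
bitwidths = {"conv1": 8, "fc": 4}

for name, w in weights.items():
    err = np.mean((w - quantize(w, bitwidths[name])) ** 2)
    print(f"{name}: {bitwidths[name]}-bit quantization MSE = {err:.6f}")
```

Lower bit-widths shrink storage and compute but raise quantization error, which is the trade-off a mixed-precision assignment balances per layer.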

Laconic Deep Learning Computing

no code implementations10 May 2018 Sayeh Sharify, Mostafa Mahmoud, Alberto Delmas Lascorz, Milos Nikolic, Andreas Moshovos

A Laconic configuration that uses a 1K-wire weight memory interface outperforms the 2K-wire conventional accelerator by 15.4x and is 1.95x more energy efficient.

Image Classification

DPRed: Making Typical Activation and Weight Values Matter In Deep Learning Computing

no code implementations17 Apr 2018 Alberto Delmas, Sayeh Sharify, Patrick Judd, Kevin Siu, Milos Nikolic, Andreas Moshovos

The per-group precisions are selected statically for the weights and dynamically by hardware for the activations.
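A minimal sketch of the dynamic side of this idea, assuming non-negative fixed-point activation values (an illustration, not DPRed's hardware mechanism): the precision for a group can be reduced at runtime to the bits needed by its largest value.

```python
def needed_precision(group):
    """Smallest bit-width that represents every value in the group.
    Assumes non-negative integer (unsigned fixed-point) values."""
    return max(v.bit_length() for v in group) or 1

# Two activation groups: one needs a full 8 bits, the other only 3.
assert needed_precision([200, 17, 4]) == 8
assert needed_precision([5, 1, 0]) == 3
```

Grouping matters because one large outlier forces the whole group to a wide precision; smaller groups expose more precision reduction at the cost of more bookkeeping.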

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How

no code implementations9 Mar 2018 Alberto Delmas, Patrick Judd, Dylan Malone Stuart, Zissis Poulos, Mostafa Mahmoud, Sayeh Sharify, Milos Nikolic, Andreas Moshovos

We show that, during inference with Convolutional Neural Networks (CNNs), 2x to 8x more ineffectual work can be exposed if, instead of targeting only those weights and activations that are zero, we target different combinations of value stream properties.
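A toy illustration of the distinction (not Bit-Tactical's actual mechanism): skipping only zero-valued activations removes less work than also skipping the zero bits inside nonzero values, since most bits of a typical fixed-point value are zero.

```python
def effectual_bits(v, width=16):
    """Count the set bits in the fixed-point representation of v;
    only these bits contribute to a multiplication's result."""
    return bin(v & ((1 << width) - 1)).count("1")

activations = [0, 3, 12, 0, 255, 1]

total_work = len(activations) * 16                        # bit-parallel: all bits
zero_skip = sum(16 for a in activations if a != 0)        # skip zero values only
bit_skip = sum(effectual_bits(a) for a in activations)    # skip zero bits too

print(total_work, zero_skip, bit_skip)
```

Here zero-skipping removes the two zero activations, but counting only effectual bits exposes far more ineffectual work hidden inside the nonzero values.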

Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability

no code implementations27 Jul 2017 Alberto Delmas, Sayeh Sharify, Patrick Judd, Andreas Moshovos

Experiments on image classification CNNs show that on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while it is 1.17x more energy efficient.

Image Classification

Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks

no code implementations23 Jun 2017 Sayeh Sharify, Alberto Delmas Lascorz, Kevin Siu, Patrick Judd, Andreas Moshovos

LM can trade off accuracy for additional improvements in execution performance and energy efficiency, and compares favorably to an accelerator that targeted only activation precisions.

Image Classification

Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks

no code implementations1 Jun 2017 Alberto Delmas, Patrick Judd, Sayeh Sharify, Andreas Moshovos

Stripes is a Deep Neural Network (DNN) accelerator that uses bit-serial computation to offer performance that is proportional to the fixed-point precision of the activation values.
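The core idea can be sketched as a bit-serial inner product whose step count scales with the activation precision; this is an illustrative software model, not Stripes' hardware design.

```python
def bit_serial_dot(activations, weights, precision):
    """Bit-serial inner product: process one activation bit per step,
    so the number of steps is proportional to the activation precision."""
    acc = 0
    for bit in range(precision):
        # Multiply each weight by one bit of its activation, then
        # shift the partial sum to that bit's significance.
        partial = sum(((a >> bit) & 1) * w for a, w in zip(activations, weights))
        acc += partial << bit
    return acc

# With 4-bit activations the loop runs 4 steps instead of 16,
# yet produces the exact bit-parallel result.
a = [3, 5, 7]
w = [2, 1, 4]
assert bit_serial_dot(a, w, 4) == sum(x * y for x, y in zip(a, w))
```

If the activations dynamically need fewer bits, the loop shortens accordingly, which is the performance knob Dynamic Stripes exploits.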

Cnvlutin2: Ineffectual-Activation-and-Weight-Free Deep Neural Network Computing

no code implementations29 Apr 2017 Patrick Judd, Alberto Delmas, Sayeh Sharify, Andreas Moshovos

We also present a modified organization that detects the activations that are deemed as ineffectual while fetching them from memory.
