Search Results for author: Naresh Shanbhag

Found 7 papers, 0 papers with code

DBQ: A Differentiable Branch Quantizer for Lightweight Deep Neural Networks

no code implementations • ECCV 2020 • Hassan Dbouk, Hetul Sanghvi, Mahesh Mehendale, Naresh Shanbhag

While various complexity reduction techniques, such as lightweight network architecture design and parameter quantization, have been successful in reducing the cost of implementing these networks, these methods have often been considered orthogonal.

Quantization

HarDNN: Feature Map Vulnerability Evaluation in CNNs

no code implementations • 22 Feb 2020 • Abdulrahman Mahmoud, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, Stephen W. Keckler

As Convolutional Neural Networks (CNNs) are increasingly being employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors.

Decision Making

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks

no code implementations • ICLR 2019 • Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh Shanbhag, Kailash Gopalakrishnan

Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation.
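The variance-reduction effect described above can be seen in a small emulation of reduced-precision accumulation: once the running sum grows large, addends below its rounding resolution are swamped and contribute nothing. This is a minimal illustrative sketch (the helper names are hypothetical, and the paper's actual result is an analytical derivation, not a simulation):

```python
import math

def round_to_bits(x, mantissa_bits):
    # Round a scalar to a floating-point format with `mantissa_bits` bits of
    # mantissa (a crude software emulation, for illustration only).
    if x == 0.0:
        return 0.0
    scale = 2.0 ** (math.floor(math.log2(abs(x))) - mantissa_bits)
    return round(x / scale) * scale

def accumulate(values, mantissa_bits):
    # Chained accumulation: every partial sum is rounded to the reduced format.
    acc = 0.0
    for v in values:
        acc = round_to_bits(acc + v, mantissa_bits)
    return acc

# With an 8-bit mantissa, the running sum of 1.0 swamps every 1e-4 addend:
# 1.0001 rounds back to 1.0 at each step, so the small terms are lost.
print(accumulate([1.0] + [1e-4] * 10000, 8))   # → 1.0 (true sum is 2.0)
print(accumulate([1.0] + [1e-4] * 10000, 52))  # close to 2.0
```

A wide enough accumulator preserves the partial sums (and hence their variance across an ensemble); the paper relates that variance to the accumulation length to obtain the minimum accumulation bit-width.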

Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm

no code implementations • ICLR 2019 • Charbel Sakr, Naresh Shanbhag

The high computational and parameter complexity of neural networks makes their training very slow and difficult to deploy on energy and storage-constrained computing systems.

Quantization

Analytical Guarantees on Numerical Precision of Deep Neural Networks

no code implementations • ICML 2017 • Charbel Sakr, Yongjune Kim, Naresh Shanbhag

We focus on numerical precision – a key parameter defining the complexity of neural networks.

Understanding the Energy and Precision Requirements for Online Learning

no code implementations • 3 Jul 2016 • Charbel Sakr, Ameya Patil, Sai Zhang, Yongjune Kim, Naresh Shanbhag

Lower bounds on the data precision are derived in terms of the desired classification accuracy and the precision of the hyperparameters used in the classifier.

General Classification
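The trade-off that such bounds capture, namely that coarser data precision degrades classification fidelity, can be illustrated with a toy experiment. This sketch assumes a uniform fixed-point quantizer and a fixed two-dimensional linear decision rule; both are hypothetical choices for illustration, not the paper's exact setup:

```python
import random

def quantize(x, bits, x_max=1.0):
    # Uniform fixed-point quantizer over [-x_max, x_max]; an illustrative
    # choice, not necessarily the quantizer analyzed in the paper.
    step = 2.0 * x_max / (2 ** bits)
    return max(-x_max, min(x_max, round(x / step) * step))

def mismatch_rate(bits, n=5000, seed=1):
    # Fraction of inputs on which a fixed linear rule sign(w . x) changes
    # its decision once the input data are quantized to `bits` bits.
    rng = random.Random(seed)
    w = [0.6, -0.8]  # hypothetical, fixed classifier weights
    mismatches = 0
    for _ in range(n):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        full = sum(wi * xi for wi, xi in zip(w, x)) >= 0
        quant = sum(wi * quantize(xi, bits) for wi, xi in zip(w, x)) >= 0
        mismatches += full != quant
    return mismatches / n

print(mismatch_rate(2), mismatch_rate(8))  # coarser data flips more decisions
```

Decisions flip only for points whose margin is smaller than the quantization perturbation, which is why the achievable accuracy dictates a minimum data precision.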

Error-Resilient Machine Learning in Near Threshold Voltage via Classifier Ensemble

no code implementations • 3 Jul 2016 • Sai Zhang, Naresh Shanbhag

In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE).

BIG-bench Machine Learning
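The error-resilience idea behind a classifier ensemble can be sketched with majority voting: even if each member occasionally makes a wrong decision, the fused decision is much more reliable. This is a minimal illustrative model assuming each base classifier independently flips its output with some error probability (a stand-in for hardware errors at near-threshold voltage, not the paper's actual architecture):

```python
import random

def majority_vote(predictions):
    # Fuse member decisions by simple majority (use an odd ensemble size).
    return max(set(predictions), key=predictions.count)

def noisy_classifier(x, error_rate, rng):
    # Hypothetical base classifier: the true label is sign(x), but an
    # error flips the decision with probability `error_rate`.
    correct = 1 if x >= 0 else -1
    return -correct if rng.random() < error_rate else correct

def ensemble_accuracy(n_members, error_rate, n_samples=2000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        truth = 1 if x >= 0 else -1
        preds = [noisy_classifier(x, error_rate, rng) for _ in range(n_members)]
        hits += majority_vote(preds) == truth
    return hits / n_samples

# A single 20%-error classifier vs. a 9-member majority-voted ensemble.
print(ensemble_accuracy(1, 0.2), ensemble_accuracy(9, 0.2))
```

With independent 20% member errors, a 9-member majority is wrong only when five or more members err simultaneously, so the ensemble's decision error drops well below that of any single member.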
