no code implementations • 10 Oct 2024 • Vignesh Sundaresha, Naresh Shanbhag
The ubiquitous deployment of deep learning systems on resource-constrained Edge devices is hindered by their high computational complexity coupled with their fragility to out-of-distribution (OOD) data, especially to naturally occurring common corruptions.
no code implementations • ECCV 2020 • Hassan Dbouk, Hetul Sanghvi, Mahesh Mehendale, Naresh Shanbhag
While various complexity reduction techniques, such as lightweight network architecture design and parameter quantization, have been successful in reducing the cost of implementing deep neural networks, these methods have often been considered orthogonal.
no code implementations • 22 Feb 2020 • Abdulrahman Mahmoud, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, Stephen W. Keckler
As Convolutional Neural Networks (CNNs) are increasingly being employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors.
no code implementations • ICLR 2019 • Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh Shanbhag, Kailash Gopalakrishnan
Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation.
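The variance argument can be illustrated with a small numerical experiment. The sketch below is only an illustration of the underlying swamping phenomenon, not the paper's derivation: it accumulates Gaussian addends once in full precision and once with the running sum rounded to a reduced-precision floating-point format after every add. The accumulation length, addend statistics, and the choice of fp16 as the narrow accumulator are assumptions made for illustration.

```python
import numpy as np

def accumulate_low_precision(x, dtype=np.float16):
    """Accumulate a vector with the running sum kept in a reduced-precision format.

    Once the running sum grows large, small addends are swamped (rounded away),
    so information is lost relative to a full-precision accumulation.
    """
    acc = dtype(0.0)
    for v in x:
        acc = dtype(acc + dtype(v))  # round the partial sum after every add
    return float(acc)

rng = np.random.default_rng(0)
n = 4096        # accumulation length (illustrative)
trials = 1000   # ensemble of partial-sum experiments

full, low = [], []
for _ in range(trials):
    x = rng.normal(loc=1.0, scale=0.5, size=n).astype(np.float32)
    full.append(float(np.sum(x, dtype=np.float64)))
    low.append(accumulate_low_precision(x, np.float16))

print("variance (full-precision accumulator):", np.var(full))
print("variance (fp16 accumulator):          ", np.var(low))
# In a typical run, the fp16 accumulator shows a markedly smaller variance across
# the ensemble of partial sums, signalling information lost to an under-provisioned
# accumulation width.
```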
no code implementations • ICLR 2019 • Charbel Sakr, Naresh Shanbhag
The high computational and parameter complexity of neural networks makes their training very slow and makes them difficult to deploy on energy- and storage-constrained computing systems.
no code implementations • ICML 2017 • Charbel Sakr, Yongjune Kim, Naresh Shanbhag
We focus on numerical precision – a key parameter defining the complexity of neural networks.
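As a rough illustration of precision as a complexity knob, the sketch below applies a generic uniform fixed-point quantizer to a weight tensor and reports the quantization SNR at several bit widths. The quantizer, tensor shape, and bit widths are assumptions chosen for illustration and are not the specific scheme or bounds analyzed in the paper.

```python
import numpy as np

def quantize_fixed_point(x, bits, x_max=None):
    """Uniform symmetric fixed-point quantization of x to the given bit width.

    `x_max` sets the clipping range; by default it is taken from the data.
    """
    if x_max is None:
        x_max = np.max(np.abs(x))
    step = x_max / (2 ** (bits - 1))  # quantization step size
    q = np.clip(np.round(x / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * step

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

for bits in (4, 6, 8, 12):
    w_q = quantize_fixed_point(w, bits)
    snr = 10 * np.log10(np.sum(w ** 2) / np.sum((w - w_q) ** 2))
    print(f"{bits}-bit weights: quantization SNR ~ {snr:.1f} dB")
# Each additional bit of precision buys roughly 6 dB of SNR, the kind of
# precision/accuracy trade-off that analytical precision guarantees formalize.
```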
no code implementations • 3 Jul 2016 • Sai Zhang, Naresh Shanbhag
In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE).
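A minimal sketch of the classifier-ensemble intuition, assuming a simple majority-vote fusion of noisy binary decisions; the hardware-oriented architecture in the paper is not reproduced here, and the error probability below stands in for both model error and hardware-induced error.

```python
import numpy as np

def majority_vote(decisions):
    """Fuse binary decisions (+1/-1) from an ensemble by majority vote."""
    return np.sign(np.sum(decisions, axis=0))

rng = np.random.default_rng(0)
n_classifiers, n_samples = 9, 10_000
true_labels = rng.choice([-1, 1], size=n_samples)

# Each base classifier is individually noisy: it flips the true label
# with probability p_err (an illustrative stand-in for errors of any origin).
p_err = 0.2
decisions = np.where(rng.random((n_classifiers, n_samples)) < p_err,
                     -true_labels, true_labels)

single_acc = np.mean(decisions[0] == true_labels)
ensemble_acc = np.mean(majority_vote(decisions) == true_labels)
print(f"single classifier accuracy:        {single_acc:.3f}")
print(f"{n_classifiers}-way majority-vote accuracy: {ensemble_acc:.3f}")
# The fused decision masks errors made by individual classifiers, which is the
# intuition behind using classifier ensembles for error resilience.
```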
no code implementations • 3 Jul 2016 • Charbel Sakr, Ameya Patil, Sai Zhang, Yongjune Kim, Naresh Shanbhag
Lower bounds on the data precision are derived in terms of the desired classification accuracy and the precision of the hyperparameters used in the classifier.
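A small empirical counterpart to such bounds, assuming hypothetical synthetic data and a fixed full-precision linear classifier: sweeping the precision of a generic uniform data quantizer exposes the accuracy knee that minimum-precision lower bounds characterize. The data model, classifier, and bit widths are illustrative assumptions, not the setting analyzed in the paper.

```python
import numpy as np

def quantize(x, bits, x_max=1.0):
    """Uniform quantization of values in [-x_max, x_max] to `bits` bits."""
    step = x_max / (2 ** (bits - 1))
    q = np.clip(np.round(x / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return step * q

rng = np.random.default_rng(0)
n, d = 5000, 16

# Hypothetical linearly separable data and a fixed full-precision linear classifier.
w_true = rng.normal(size=d)
x = np.clip(rng.normal(size=(n, d)) * 0.3, -1.0, 1.0)
y = np.sign(x @ w_true)

for bits_x in (1, 2, 3, 4, 6, 8):
    x_q = quantize(x, bits_x)
    acc = np.mean(np.sign(x_q @ w_true) == y)
    print(f"data precision {bits_x} bits -> accuracy {acc:.3f}")
# Accuracy degrades sharply once the data precision drops below a threshold,
# which is the behavior that analytical lower bounds on precision capture.
```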