no code implementations • 25 Oct 2024 • Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
The proliferation of smartphones and other mobile devices provides a unique opportunity to make Advanced Driver Assistance Systems (ADAS) accessible to everyone in the form of an application empowered by low-cost Machine/Deep Learning (ML/DL) models to enhance road safety.
no code implementations • 22 Sep 2024 • Niraj Pudasaini, Muhammad Abdullah Hanif, Muhammad Shafique
This varying performance across distinct scenarios suggests that designing DL-SLAM algorithms with the operating environment and task in mind can achieve optimal performance and resource efficiency for deployment on resource-constrained embedded platforms.
no code implementations • 2 Jul 2024 • Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
In the realm of deploying Machine Learning-based Advanced Driver Assistance Systems (ML-ADAS) into real-world scenarios, adverse weather conditions pose a significant challenge.
no code implementations • 6 May 2024 • Nishant Suresh Aswani, Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique
We believe a scaled-down version of our approach will provide insight into the benefits and pitfalls of using TCA to study continual learning dynamics.
no code implementations • 18 Mar 2024 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique
In this paper, we introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to comprehensively disrupt monocular depth estimation (MDE) in autonomous navigation applications.
no code implementations • 28 Feb 2024 • Abdul Basit, Khizar Hussain, Muhammad Abdullah Hanif, Muhammad Shafique
MedAide achieves 77% accuracy in medical consultations and scores 56 on the USMLE benchmark, enabling an energy-efficient healthcare assistance platform that alleviates privacy concerns thanks to edge-based deployment, thereby empowering the community.
no code implementations • 20 Nov 2023 • Nandish Chattopadhyay, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
ODDR operates through a robust three-stage pipeline: Fragmentation, Segregation, and Neutralization.
no code implementations • 16 Oct 2023 • Kamila Zaman, Alberto Marchisio, Muhammad Abdullah Hanif, Muhammad Shafique
Quantum Computing (QC) claims to improve the efficiency of solving complex problems compared to classical computing.
no code implementations • 11 Aug 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.
no code implementations • 6 Aug 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
In this paper, we investigate the vulnerability of MDE to adversarial patches.
no code implementations • 20 Jul 2023 • Vasileios Leon, Muhammad Abdullah Hanif, Giorgos Armeniakos, Xun Jiao, Muhammad Shafique, Kiamal Pekmestzi, Dimitrios Soudris
The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) forces the computing-systems community to explore new design approaches.
no code implementations • 21 May 2023 • Muhammad Abdullah Hanif, Muhammad Shafique
To address this issue, we propose a novel Fault-Aware Quantization (FAQ) technique for mitigating the effects of stuck-at permanent faults in the on-chip weight memory of DNN accelerators at a negligible overhead cost compared to fault-aware retraining while offering comparable accuracy results.
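As a rough, hypothetical illustration of the idea (not the paper's actual algorithm), the sketch below picks a quantization scale that minimizes weight error after known stuck-at faults are applied. It assumes symmetric per-tensor int8 quantization and a single byte-wide stuck-at pattern shared by all stored weights; all function and parameter names (`quantize`, `apply_stuck_at`, `fault_aware_scale`, `n_grid`) are illustrative.

```python
import numpy as np

def quantize(w, scale, bits=8):
    # Symmetric per-tensor quantization to signed integers.
    q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int32)

def apply_stuck_at(q, stuck_at_1, stuck_at_0):
    # Emulate stuck-at faults on the stored two's-complement byte of each weight.
    u = q.astype(np.uint8)                       # wrap negatives to their raw byte
    u = (u | stuck_at_1) & np.uint8(~stuck_at_0 & 0xFF)
    return u.astype(np.int8).astype(np.int32)    # back to signed values

def fault_aware_scale(w, stuck_at_1, stuck_at_0, bits=8, n_grid=32):
    # Pick the quantization scale that minimizes weight error AFTER the faults.
    best_scale, best_err = None, np.inf
    max_abs = np.abs(w).max()
    for t in np.linspace(0.3, 1.0, n_grid):      # candidate clipping thresholds
        scale = t * max_abs / (2 ** (bits - 1) - 1)
        q_faulty = apply_stuck_at(quantize(w, scale, bits), stuck_at_1, stuck_at_0)
        err = np.mean((w - q_faulty * scale) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale
```

In practice the fault map would be per memory cell and the search would be guided by model accuracy rather than plain weight MSE; the sketch only conveys the flavor of making quantization decisions fault-aware.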
no code implementations • CVPR 2024 • Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
Patch-based adversarial attacks have been shown to compromise the robustness and reliability of computer vision systems.
no code implementations • 20 Apr 2023 • Muhammad Abdullah Hanif, Muhammad Shafique
To realize these concepts, in this work, we present a novel framework, eFAT, that computes the resilience of a given DNN to faults at different fault rates and with different levels of retraining, and it uses that knowledge to build a resilience map given a user-defined accuracy constraint.
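A minimal sketch of what such a resilience map could look like is shown below, assuming a caller-provided `evaluate_accuracy(rate, epochs)` routine that performs fault-aware retraining and evaluation; the routine and all names are hypothetical, not taken from the eFAT paper.

```python
def build_resilience_map(evaluate_accuracy, fault_rates, retraining_epochs, accuracy_floor):
    # For every (fault rate, retraining budget) pair, record the accuracy and
    # whether the DNN still meets the user-defined accuracy constraint.
    resilience_map = {}
    for rate in fault_rates:
        for epochs in retraining_epochs:
            acc = evaluate_accuracy(rate, epochs)
            resilience_map[(rate, epochs)] = (acc, acc >= accuracy_floor)
    return resilience_map
```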
no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
The key mechanisms of our EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile considering different bit error rate (BER) values; (4) leveraging the information for developing an efficient fault-aware training (FAT) that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption.
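As a hedged sketch of mechanism (4), the snippet below injects random bit flips into quantized weights at a given bit error rate, the kind of perturbation a fault-aware training loop could apply on every read of DRAM-resident weights. The function name and defaults are illustrative, not the paper's implementation.

```python
import numpy as np

def inject_bit_errors(q_weights, ber, bits=8, rng=None):
    # Flip each stored weight bit independently with probability `ber`,
    # modelling what an approximate / low-voltage DRAM could return on a read.
    if rng is None:
        rng = np.random.default_rng(0)
    u = q_weights.astype(np.uint8)                       # raw two's-complement bytes
    flips = rng.random((*u.shape, bits)) < ber           # one Bernoulli draw per bit
    flip_mask = np.zeros(u.shape, dtype=np.uint8)
    for b in range(bits):
        flip_mask = flip_mask | (flips[..., b].astype(np.uint8) * np.uint8(1 << b))
    return (u ^ flip_mask).astype(np.int8)

# In fault-aware training, the forward pass would use the perturbed weights while
# the optimizer keeps updating the clean copy, e.g. (pseudocode):
#   w_read = inject_bit_errors(quantized_weights, ber=1e-4)
#   loss = forward(batch, dequantize(w_read)); backpropagate into the clean weights
```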
no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Our FAM technique leverages the fault map of SNN compute engine for (i) minimizing weight corruption when mapping weight bits on the faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow.
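A toy version of such fault-aware mapping is sketched below: it greedily places the largest-magnitude weights into the memory words with the fewest or lowest-order faulty bits. The data layout and names are assumptions for illustration only, and it assumes at least as many memory words as weights.

```python
import numpy as np

def map_weights_to_memory(weights, fault_bit_map):
    # `weights` is a 1-D numpy array; `fault_bit_map[i]` is the set of faulty
    # bit positions in memory word i. Larger-magnitude weights (whose corruption
    # hurts accuracy most) are placed in the words with the cheapest fault cost.
    word_cost = np.array([sum(2 ** b for b in bits) for bits in fault_bit_map])
    clean_words_first = np.argsort(word_cost)             # cleanest words first
    big_weights_first = np.argsort(-np.abs(weights))       # most important weights first
    mapping = np.empty(len(weights), dtype=int)
    mapping[big_weights_first] = clean_words_first[:len(weights)]
    return mapping   # mapping[j] = memory word assigned to weight j
```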
no code implementations • 3 Mar 2023 • Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
To explore new ground for a better privacy-utility trade-off, this work investigates: (i) whether models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) whether models' hyperparameters have any impact on the privacy/utility trade-off of differentially private models.
no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique
Unlike mask-based fake-weather attacks that require access to the underlying computing hardware or image memory, our attack is based on emulating the effects of a natural weather condition (i.e., raindrops) that can be printed on a translucent sticker, which is externally placed over the lens of a camera.
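Purely as a digital stand-in for the printed translucent sticker described above, the sketch below overlays translucent circular blobs on an input image to crudely emulate raindrops on a camera lens; it is not the paper's optimized attack, and all parameter values are placeholders.

```python
import numpy as np

def add_synthetic_raindrops(image, n_drops=20, max_radius=12, alpha=0.5, rng=None):
    # Overlay translucent circular "raindrop" blobs on an HxWx3 float image in [0, 1].
    # Each drop smears its region toward the local mean colour, loosely emulating
    # light refraction through a water droplet on the lens.
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.copy()
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(4, max_radius)
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        region_mean = out[mask].mean(axis=0)
        out[mask] = (1 - alpha) * out[mask] + alpha * region_mean
    return np.clip(out, 0.0, 1.0)
```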
no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
APARATE results in a mean depth estimation error surpassing 0.5, significantly impacting as much as 99% of the targeted region when applied to CNN-based MDE models.
no code implementations • 31 Jul 2022 • Muhammad Abdullah Hanif, Giuseppe Maria Sarda, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique
The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle to deploying large DNNs in resource-constrained systems.
no code implementations • 18 Apr 2022 • Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi, Aviral Shrivastava, Ihsen Alouani, Muhammad Shafique
The real-world use cases of Machine Learning (ML) have exploded over the past few years.
no code implementations • 10 Mar 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
These errors can change the weight values and neuron operations in the compute engine of SNN accelerators, thereby leading to incorrect outputs and accuracy degradation.
no code implementations • 20 Sep 2021 • Muhammad Shafique, Alberto Marchisio, Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif
Afterward, we discuss how to further improve the performance (latency) and the energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation.
no code implementations • 23 Aug 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy and the respective fault-mitigation techniques have not been thoroughly explored.
no code implementations • 26 May 2021 • Khadija Shaheen, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique
Continual learning is essential for many real-world applications, as frozen pre-trained models cannot effectively deal with non-stationary data distributions.
no code implementations • 5 May 2021 • Faiq Khalid, Muhammad Abdullah Hanif, Muhammad Shafique
From tiny pacemaker chips to aircraft collision avoidance systems, the state-of-the-art Cyber-Physical Systems (CPS) have increasingly started to rely on Deep Neural Networks (DNNs).
no code implementations • 28 Feb 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
The key mechanisms of SparkXD are: (1) improving the SNN error tolerance through fault-aware training that considers bit errors from approximate DRAM, (2) analyzing the error tolerance of the improved SNN model to find the maximum tolerable bit error rate (BER) that meets the targeted accuracy constraint, and (3) energy-efficient DRAM data mapping for the resilient SNN model that maps the weights in the appropriate DRAM location to minimize the DRAM access energy.
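Mechanism (2) could be sketched as a simple sweep that returns the largest bit error rate still meeting the accuracy constraint, assuming a caller-supplied `evaluate_accuracy(ber)` that injects DRAM bit errors and runs inference; the function name and candidate rates are illustrative only.

```python
def max_tolerable_ber(evaluate_accuracy, accuracy_floor,
                      ber_candidates=(1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2)):
    # Return the largest bit error rate whose accuracy stays above the floor.
    # `evaluate_accuracy(ber)` is assumed to run inference with bit errors injected
    # into the DRAM-resident weights at the given rate and return test accuracy.
    tolerable = 0.0
    for ber in ber_candidates:          # candidates in increasing order
        if evaluate_accuracy(ber) >= accuracy_floor:
            tolerable = ber
        else:
            break
    return tolerable
```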
no code implementations • 29 Jan 2021 • Muhammad Abdullah Hanif, Muhammad Shafique
We propose DNN-Life, a specialized aging analysis and mitigation framework for DNNs, which jointly exploits hardware- and software-level knowledge to improve the lifetime of a DNN weight memory with reduced energy overhead.
no code implementations • 12 Oct 2020 • Alberto Marchisio, Vojtech Mrazek, Muhammad Abdullah Hanif, Muhammad Shafique
We analyze the corresponding on-chip memory requirements and leverage this analysis to propose a novel methodology to explore different scratchpad memory designs and their energy/area trade-offs.
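A toy version of such a design-space sweep is shown below; the candidate sizes and the per-access energy constants are placeholders, not figures from the paper.

```python
def explore_scratchpad_sizes(layer_footprints_kb, sizes_kb=(64, 128, 256, 512),
                             e_sram_pj=5.0, e_dram_pj=640.0):
    # For each candidate scratchpad size, count how many layers fit entirely
    # on-chip and accumulate a crude per-access energy figure. Layers that fit
    # pay SRAM energy; the rest spill to DRAM.
    results = []
    for size in sizes_kb:
        fits = [f <= size for f in layer_footprints_kb]
        energy = sum(e_sram_pj if ok else e_dram_pj for ok in fits)
        results.append({"size_kb": size,
                        "layers_on_chip": sum(fits),
                        "relative_energy_pj": energy})
    return results
```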
no code implementations • 21 Apr 2020 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Many convolutional neural network (CNN) accelerators face performance- and energy-efficiency challenges which are crucial for embedded implementations, due to high DRAM access latency and energy.
no code implementations • 3 Dec 2019 • Mahum Naseer, Mishal Fatima Minhas, Faiq Khalid, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique
With a constant improvement in the network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems.
1 code implementation • 2 Dec 2019 • Le-Ha Hoang, Muhammad Abdullah Hanif, Muhammad Shafique
In this paper, we perform a comprehensive error resilience analysis of DNNs subjected to hardware faults (e.g., permanent faults) in the weight memory.
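A minimal sketch of the kind of fault-injection experiment such an analysis relies on is given below, assuming int8-quantized weights and uniformly random stuck-at faults; it is illustrative only and not the paper's fault model.

```python
import numpy as np

def inject_permanent_faults(q_weights, fault_rate, rng=None):
    # Force a random subset of stored weight bits to stuck-at-0 or stuck-at-1.
    # `q_weights` is an int8 array; `fault_rate` is the fraction of all bits in
    # the weight memory that are permanently faulty.
    if rng is None:
        rng = np.random.default_rng(0)
    u = q_weights.astype(np.uint8).reshape(-1).copy()
    n_bits = u.size * 8
    faulty_bits = rng.choice(n_bits, size=int(fault_rate * n_bits), replace=False)
    for idx in faulty_bits:
        word, bit = divmod(int(idx), 8)
        if rng.random() < 0.5:                               # stuck-at-1
            u[word] |= np.uint8(1 << bit)
        else:                                                # stuck-at-0
            u[word] &= np.uint8(~(1 << bit) & 0xFF)
    return u.astype(np.int8).reshape(q_weights.shape)

# A resilience analysis then sweeps fault rates and records the accuracy drop:
# for rate in (1e-7, 1e-6, 1e-5, 1e-4, 1e-3):
#     acc = evaluate(model_with_weights(inject_permanent_faults(q_w, rate)))
```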
1 code implementation • 11 Jun 2019 • Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina, Muhammad Abdullah Hanif, Muhammad Shafique
A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers in such a way that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized.
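A greatly simplified, hypothetical selection loop in this spirit is sketched below: for each group of layers it picks the lowest-energy multiplier from the library whose contribution keeps the accumulated error within a budget. The library format, sensitivities, and names are assumptions, not the paper's search procedure.

```python
def select_approx_multipliers(layer_groups, library, error_budget):
    # `library` maps a multiplier name to (mean_relative_error, energy_per_op).
    # Each entry of `layer_groups` is (group_name, ops, error_sensitivity);
    # one multiplier is chosen per group and shared by all its layers.
    choice, total_error = {}, 0.0
    for name, ops, sensitivity in layer_groups:
        best = None
        for mul, (err, energy) in sorted(library.items(), key=lambda kv: kv[1][1]):
            if total_error + sensitivity * err <= error_budget:
                best = (mul, err, energy)
                break                      # cheapest multiplier that fits the budget
        if best is None:                   # fall back to the most accurate multiplier
            best = min(((m, e, en) for m, (e, en) in library.items()),
                       key=lambda t: t[1])
        choice[name] = best[0]
        total_error += sensitivity * best[1]
    return choice
```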
1 code implementation • 24 May 2019 • Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Muhammad Abdullah Hanif, Maurizio Martina, Guido Masera, Muhammad Shafique
The goal is to reduce the hardware requirements of CapsNets by removing unused/redundant connections and capsules, while keeping high accuracy through tests of different learning rate policies and batch sizes.
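As an illustrative sketch only (not the paper's algorithm), the snippet below combines unstructured magnitude pruning of connections with structured removal of the least-active capsules; the tensor shapes, activity statistic, and keep ratios are assumptions.

```python
import numpy as np

def prune_connections_and_capsules(W, capsule_activity, weight_keep=0.5, capsule_keep=0.8):
    # W: (n_caps_out, n_caps_in, d_out, d_in) transformation weights.
    # capsule_activity: mean activation of each output capsule on a calibration set.

    # Unstructured: zero out the smallest-magnitude connections.
    thr = np.quantile(np.abs(W), 1.0 - weight_keep)
    W_pruned = np.where(np.abs(W) >= thr, W, 0.0)

    # Structured: drop the output capsules whose activity is lowest.
    activity = np.asarray(capsule_activity)
    n_keep = max(1, int(capsule_keep * len(activity)))
    keep_idx = np.sort(np.argsort(-activity)[:n_keep])
    return W_pruned[keep_idx], keep_idx
```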
2 code implementations • 22 Feb 2019 • Vojtech Mrazek, Muhammad Abdullah Hanif, Zdenek Vasicek, Lukas Sekanina, Muhammad Shafique
Because these libraries contain tens to thousands of approximate implementations for a single arithmetic operation, it is intractable to find an optimal combination of approximate circuits in the library even for an application consisting of a few operations.
no code implementations • 4 Feb 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t.
no code implementations • 4 Feb 2019 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Our experimental results show that ROMANet saves DRAM access energy by 12% for AlexNet, by 36% for VGG-16, and by 46% for MobileNet, while also improving the DRAM throughput by 10%, as compared to the state-of-the-art.
no code implementations • 4 Feb 2019 • Alberto Marchisio, Muhammad Abdullah Hanif, Mohammad Taghi Teimoori, Muhammad Shafique
By leveraging this analysis, we propose a methodology to explore different on-chip memory designs and a power-gating technique to further reduce the energy consumption, depending upon the utilization across different operations of a CapsuleNet.
1 code implementation • 29 Jan 2019 • Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
To address this limitation, decision-based attacks have been proposed, which can estimate the model but require several thousand queries to generate a single untargeted attack image.
no code implementations • 28 Jan 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique
Capsule Networks preserve the hierarchical spatial relationships between objects, and thereby bear the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in tasks like image classification.
no code implementations • 5 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Muhammad Shafique
Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to efficiently and accurately process enormous amounts of data.
1 code implementation • 4 Nov 2018 • Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
In this paper, we introduce a novel technique based on the Secure Selective Convolutional (SSC) techniques in the training loop that increases the robustness of a given DNN by allowing it to learn the data distribution based on the important edges in the input image.
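As a loose, hypothetical analogue of learning from important edges (not the paper's Secure Selective Convolutional technique), the sketch below computes a Sobel-style edge map and uses it to re-weight a grayscale input before it reaches the network; the function name and blending factor are assumptions.

```python
import numpy as np

def edge_emphasis(image, strength=0.7):
    # Emphasize important edges in a grayscale HxW image with values in [0, 1].
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(image, 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + image.shape[0], j:j + image.shape[1]]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + image.shape[0], j:j + image.shape[1]]
             for i in range(3) for j in range(3))
    edges = np.hypot(gx, gy)
    edges = edges / (edges.max() + 1e-8)
    # Blend the original image with an edge-weighted copy of itself.
    return (1 - strength) * image + strength * edges * image
```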
1 code implementation • 4 Nov 2018 • Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to the convolutional neural networks (CNNs).
no code implementations • 2 Nov 2018 • Alberto Marchisio, Muhammad Abdullah Hanif, Muhammad Shafique
In this paper, we propose CapsAcc, the first specialized CMOS-based hardware architecture to perform CapsuleNets inference with high performance and energy efficiency.
no code implementations • 2 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference or identified during the validation phase.
no code implementations • 30 Oct 2018 • Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Tanvir, Rehan Hafiz, Semeen Rehman, Muhammad Shafique
The state-of-the-art accelerators for Convolutional Neural Networks (CNNs) typically focus on accelerating only the convolutional layers, while paying little attention to the fully-connected layers.