Search Results for author: Muhammad Abdullah Hanif

Found 42 papers, 6 papers with code

SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications

no code implementations18 Mar 2024 Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique

In this paper, we introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to comprehensively disrupt monocular depth estimation (MDE) in autonomous navigation applications.

Autonomous Driving Autonomous Navigation +2

MedAide: Leveraging Large Language Models for On-Premise Medical Assistance on Edge Devices

no code implementations28 Feb 2024 Abdul Basit, Khizar Hussain, Muhammad Abdullah Hanif, Muhammad Shafique

MedAide achieves 77% accuracy in medical consultations and scores 56 on the USMLE benchmark, enabling an energy-efficient healthcare assistance platform whose edge-based deployment alleviates privacy concerns, thereby empowering the community.

Chatbot Edge-computing

ODDR: Outlier Detection & Dimension Reduction Based Defense Against Adversarial Patches

no code implementations20 Nov 2023 Nandish Chattopadhyay, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique

ODDR employs a three-stage pipeline: Fragmentation, Segregation, and Neutralization, providing a model-agnostic solution applicable to both image classification and object detection tasks.

Dimensionality Reduction Image Classification +3

Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook

no code implementations11 Aug 2023 Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique

Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.

Adversarial Attack Depth Estimation +2

Approximate Computing Survey, Part II: Application-Specific & Architectural Approximation Techniques and Applications

no code implementations20 Jul 2023 Vasileios Leon, Muhammad Abdullah Hanif, Giorgos Armeniakos, Xun Jiao, Muhammad Shafique, Kiamal Pekmestzi, Dimitrios Soudris

The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) forces the community of computing systems to explore new design approaches.

FAQ: Mitigating the Impact of Faults in the Weight Memory of DNN Accelerators through Fault-Aware Quantization

no code implementations21 May 2023 Muhammad Abdullah Hanif, Muhammad Shafique

To address this issue, we propose a novel Fault-Aware Quantization (FAQ) technique for mitigating the effects of stuck-at permanent faults in the on-chip weight memory of DNN accelerators at a negligible overhead cost compared to fault-aware retraining while offering comparable accuracy results.
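The core idea of fault-aware quantization can be illustrated with a small sketch (a hypothetical simplification for intuition, not the paper's actual implementation): given a map of stuck-at-0/stuck-at-1 bits in a weight-memory word, choose the quantized code whose *stored* (faulty) value dequantizes closest to the original weight.

```python
import numpy as np

def fault_aware_quantize(w, scale, stuck0_mask, stuck1_mask, bits=8):
    """Pick the unsigned `bits`-bit code whose value *after* stuck-at
    faults lands closest to the real weight w (illustrative sketch)."""
    codes = np.arange(2 ** bits)
    # stuck-at-0 cells force their bit to 0; stuck-at-1 cells force it to 1
    stored = (codes & ~stuck0_mask) | stuck1_mask
    err = np.abs(stored * scale - w)
    return int(codes[np.argmin(err)])
```

With a fault-free word this reduces to ordinary nearest-level quantization; with, say, the LSB stuck at 1, it selects a code whose corrupted stored value still lies within one quantization step of the weight, which is why no retraining is needed.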

Quantization

DAP: A Dynamic Adversarial Patch for Evading Person Detectors

no code implementations19 May 2023 Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

Patch-based adversarial attacks have been proven to compromise the robustness and reliability of computer vision systems.

eFAT: Improving the Effectiveness of Fault-Aware Training for Mitigating Permanent Faults in DNN Hardware Accelerators

no code implementations20 Apr 2023 Muhammad Abdullah Hanif, Muhammad Shafique

To realize these concepts, in this work, we present a novel framework, eFAT, that computes the resilience of a given DNN to faults at different fault rates and with different levels of retraining, and it uses that knowledge to build a resilience map given a user-defined accuracy constraint.

RescueSNN: Enabling Reliable Executions on Spiking Neural Network Accelerators under Permanent Faults

no code implementations8 Apr 2023 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Our FAM technique leverages the fault map of SNN compute engine for (i) minimizing weight corruption when mapping weight bits on the faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow.

EnforceSNN: Enabling Resilient and Energy-Efficient Spiking Neural Network Inference considering Approximate DRAMs for Embedded Systems

no code implementations8 Apr 2023 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

The key mechanisms of our EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile considering different bit error rate (BER) values; (4) leveraging the information for developing an efficient fault-aware training (FAT) that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption.

Exploring Machine Learning Privacy/Utility trade-off from a hyperparameters Lens

no code implementations3 Mar 2023 Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

Towards investigating new ground for a better privacy-utility trade-off, this work questions: (i) whether models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) whether models' hyperparameters have any impact on the privacy/utility trade-off of differentially private models.

Privacy Preserving

APARATE: Adaptive Adversarial Patch for CNN-based Monocular Depth Estimation for Autonomous Navigation

no code implementations2 Mar 2023 Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique

APARATE results in a mean depth estimation error surpassing $0.5$, significantly impacting as much as $99\%$ of the targeted region when applied to CNN-based MDE models.

Autonomous Driving Autonomous Navigation +3

AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems

no code implementations2 Mar 2023 Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique

Unlike mask-based fake-weather attacks that require access to the underlying computing hardware or image memory, our attack emulates the effects of a natural weather condition (i.e., raindrops) that can be printed on a translucent sticker placed externally over the lens of a camera.

Adversarial Attack Autonomous Vehicles

CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks

no code implementations31 Jul 2022 Muhammad Abdullah Hanif, Giuseppe Maria Sarda, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique

The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle towards deploying large DNNs in resource-constrained systems.

Quantization

SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors

no code implementations10 Mar 2022 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

These errors can change the weight values and neuron operations in the compute engine of SNN accelerators, thereby leading to incorrect outputs and accuracy degradation.

Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework

no code implementations20 Sep 2021 Muhammad Shafique, Alberto Marchisio, Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif

Afterward, we discuss how to further improve the performance (latency) and the energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation.

Quantization

ReSpawn: Energy-Efficient Fault-Tolerance for Spiking Neural Networks considering Unreliable Memories

no code implementations23 Aug 2021 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy and the respective fault-mitigation techniques are not thoroughly explored.

Continual Learning for Real-World Autonomous Systems: Algorithms, Challenges and Frameworks

no code implementations26 May 2021 Khadija Shaheen, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique

Continual learning is essential for all real-world applications, as frozen pre-trained models cannot effectively deal with non-stationary data distributions.

Continual Learning

Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks

no code implementations5 May 2021 Faiq Khalid, Muhammad Abdullah Hanif, Muhammad Shafique

From tiny pacemaker chips to aircraft collision avoidance systems, the state-of-the-art Cyber-Physical Systems (CPS) have increasingly started to rely on Deep Neural Networks (DNNs).

Collision Avoidance

SparkXD: A Framework for Resilient and Energy-Efficient Spiking Neural Network Inference using Approximate DRAM

no code implementations28 Feb 2021 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

The key mechanisms of SparkXD are: (1) improving the SNN error tolerance through fault-aware training that considers bit errors from approximate DRAM, (2) analyzing the error tolerance of the improved SNN model to find the maximum tolerable bit error rate (BER) that meets the targeted accuracy constraint, and (3) energy-efficient DRAM data mapping for the resilient SNN model that maps the weights in the appropriate DRAM location to minimize the DRAM access energy.
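Mechanism (1) above, fault-aware training against approximate-DRAM bit errors, boils down to injecting random bit flips into the stored (quantized) weights at a given bit error rate during training. A minimal, hypothetical sketch of such an injector (the function name and shape are assumptions, not from the paper):

```python
import numpy as np

def inject_bit_errors(q_weights, ber, bits=8, rng=None):
    """Flip each stored bit of integer-quantized weights independently
    with probability `ber` (bit error rate) -- illustrative only."""
    rng = rng or np.random.default_rng(0)
    flips = np.zeros_like(q_weights)
    for b in range(bits):
        hit = rng.random(q_weights.shape) < ber  # which cells of bit-plane b flip
        flips |= hit.astype(q_weights.dtype) << b
    return q_weights ^ flips  # XOR applies the bit flips
```

Training the SNN on weights corrupted this way teaches the model to tolerate the BER of the approximate DRAM it will run on.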

DNN-Life: An Energy-Efficient Aging Mitigation Framework for Improving the Lifetime of On-Chip Weight Memories in Deep Neural Network Hardware Architectures

no code implementations29 Jan 2021 Muhammad Abdullah Hanif, Muhammad Shafique

We propose DNN-Life, a specialized aging analysis and mitigation framework for DNNs, which jointly exploits hardware- and software-level knowledge to improve the lifetime of a DNN weight memory with reduced energy overhead.

Quantization Hardware Architecture

DESCNet: Developing Efficient Scratchpad Memories for Capsule Network Hardware

no code implementations12 Oct 2020 Alberto Marchisio, Vojtech Mrazek, Muhammad Abdullah Hanif, Muhammad Shafique

We analyze the corresponding on-chip memory requirements and leverage this analysis to propose a novel methodology for exploring different scratchpad memory designs and their energy/area trade-offs.

Management

DRMap: A Generic DRAM Data Mapping Policy for Energy-Efficient Processing of Convolutional Neural Networks

no code implementations21 Apr 2020 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Many convolutional neural network (CNN) accelerators face performance- and energy-efficiency challenges which are crucial for embedded implementations, due to high DRAM access latency and energy.

FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks

no code implementations3 Dec 2019 Mahum Naseer, Mishal Fatima Minhas, Faiq Khalid, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique

With a constant improvement in the network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems.

General Classification

FT-ClipAct: Resilience Analysis of Deep Neural Networks and Improving their Fault Tolerance using Clipped Activation

no code implementations2 Dec 2019 Le-Ha Hoang, Muhammad Abdullah Hanif, Muhammad Shafique

In this paper, we perform a comprehensive error resilience analysis of DNNs subjected to hardware faults (e.g., permanent faults) in the weight memory.
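The clipped-activation idea from the title can be sketched in one line: bound each activation so that a fault-inflated weight cannot blow a value up arbitrarily and propagate it through the network. (The threshold here is an assumed per-layer hyperparameter for illustration, not a value from the paper.)

```python
import numpy as np

def clipped_relu(x, threshold):
    # Standard ReLU, but saturated at `threshold` so that faulty
    # (e.g., bit-flipped) weights cannot produce unbounded activations.
    return np.clip(x, 0.0, threshold)
```

Replacing plain ReLU with such a saturated variant limits the worst-case error a single stuck-at fault can inject into downstream layers.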

Autonomous Driving General Classification

ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining

1 code implementation11 Jun 2019 Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina, Muhammad Abdullah Hanif, Muhammad Shafique

A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers in such a way that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized.

Multiobjective Optimization

FasTrCaps: An Integrated Framework for Fast yet Accurate Training of Capsule Networks

1 code implementation24 May 2019 Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Muhammad Abdullah Hanif, Maurizio Martina, Guido Masera, Muhammad Shafique

The goal is to reduce the hardware requirements of CapsNets by removing unused/redundant connections and capsules, while keeping high accuracy through tests of different learning rate policies and batch sizes.

Image Classification Object Detection

autoAx: An Automatic Design Space Exploration and Circuit Building Methodology utilizing Libraries of Approximate Components

2 code implementations22 Feb 2019 Vojtech Mrazek, Muhammad Abdullah Hanif, Zdenek Vasicek, Lukas Sekanina, Muhammad Shafique

Because these libraries contain tens to thousands of approximate implementations for a single arithmetic operation, it is intractable to find an optimal combination of approximate circuits in the library even for an application consisting of a few operations.

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

no code implementations4 Feb 2019 Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t.

Data Poisoning

ROMANet: Fine-Grained Reuse-Driven Off-Chip Memory Access Management and Data Organization for Deep Neural Network Accelerators

no code implementations4 Feb 2019 Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Our experimental results show that the ROMANet saves DRAM access energy by 12% for the AlexNet, by 36% for the VGG-16, and by 46% for the MobileNet, while also improving the DRAM throughput by 10%, as compared to the state-of-the-art.

Management Scheduling

CapStore: Energy-Efficient Design and Management of the On-Chip Memory for CapsuleNet Inference Accelerators

no code implementations4 Feb 2019 Alberto Marchisio, Muhammad Abdullah Hanif, Mohammad Taghi Teimoori, Muhammad Shafique

By leveraging this analysis, we propose a methodology to explore different on-chip memory designs and a power-gating technique to further reduce the energy consumption, depending upon the utilization across different operations of a CapsuleNet.

Management

RED-Attack: Resource Efficient Decision based Attack for Machine Learning

1 code implementation29 Jan 2019 Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

To address this limitation, decision-based attacks have been proposed which can estimate the model but they require several thousand queries to generate a single untargeted attack image.

BIG-bench Machine Learning General Classification +1

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks

no code implementations28 Jan 2019 Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

Capsule Networks preserve the hierarchical spatial relationships between objects and thereby bear the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in tasks like image classification.

Image Classification Traffic Sign Recognition

Security for Machine Learning-based Systems: Attacks and Challenges during Training and Inference

no code implementations5 Nov 2018 Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Muhammad Shafique

Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to efficiently and accurately process enormous amounts of data.

BIG-bench Machine Learning Traffic Sign Recognition

SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters

1 code implementation4 Nov 2018 Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

In this paper, we introduce a novel technique based on the Secure Selective Convolutional (SSC) techniques in the training loop that increases the robustness of a given DNN by allowing it to learn the data distribution based on the important edges in the input image.

QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

1 code implementation4 Nov 2018 Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to the convolutional neural networks (CNNs).

Quantization

CapsAcc: An Efficient Hardware Accelerator for CapsuleNets with Data Reuse

no code implementations2 Nov 2018 Alberto Marchisio, Muhammad Abdullah Hanif, Muhammad Shafique

In this paper, we propose CapsAcc, the first specialized CMOS-based hardware architecture to perform CapsuleNets inference with high performance and energy efficiency.

Distributed, Parallel, and Cluster Computing Hardware Architecture

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks

no code implementations2 Nov 2018 Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be catered by preprocessing during inference or can be identified during the validation phase.

Autonomous Driving Data Poisoning +4

MPNA: A Massively-Parallel Neural Array Accelerator with Dataflow Optimization for Convolutional Neural Networks

no code implementations30 Oct 2018 Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Tanvir, Rehan Hafiz, Semeen Rehman, Muhammad Shafique

The state-of-the-art accelerators for Convolutional Neural Networks (CNNs) typically focus on accelerating only the convolutional layers, while giving little priority to the fully-connected layers.
