Search Results for author: Faiq Khalid

Found 15 papers, 3 papers with code

Security Analysis of Capsule Network Inference using Horizontal Collaboration

no code implementations • 22 Sep 2021 • Adewale Adeyemo, Faiq Khalid, Tolulope A. Odetola, Syed Rafay Hasan

Similar to traditional CNNs, CapsNets are also vulnerable to several malicious attacks, as has been studied extensively in the literature.

Collaborative Inference, Self-Driving Cars

FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack

no code implementations • 13 Jun 2021 • Tolulope Odetola, Faiq Khalid, Travis Sandefur, Hawzhin Mohammed, Syed Rafay Hasan

Since, in horizontal collaboration between resource-constrained (RC) AIoT devices, different sections of the CNN architecture are outsourced to different untrusted third parties, the attacker may not know the input image, but it does have access to the layer-by-layer output feature maps of the sections of the CNN architecture assigned to it.
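No code accompanies this entry, so the following is only a minimal sketch of the threat model the abstract describes: a CNN is split into sections run by different parties, and each untrusted party can record the layer-by-layer feature maps of its assigned section without ever seeing the input image. The `ThirdPartySection` class, the random "conv-like" layers, and all shapes below are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def conv_like(x, out_channels, rng):
    """Stand-in for a convolutional layer: random projection + ReLU."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.maximum(w @ x.reshape(x.shape[0], -1), 0).reshape(out_channels, *x.shape[1:])

class ThirdPartySection:
    """Hypothetical untrusted party that runs one section of the CNN.

    It never sees the input image, but it records every feature map
    that passes through its assigned layers (the threat model above).
    """
    def __init__(self, layer_widths, seed):
        self.layer_widths = layer_widths
        self.rng = np.random.default_rng(seed)
        self.observed_feature_maps = []   # the attacker's view

    def forward(self, fmap):
        for width in self.layer_widths:
            fmap = conv_like(fmap, width, self.rng)
            self.observed_feature_maps.append(fmap.copy())
        return fmap

# Horizontal collaboration: two untrusted parties run consecutive sections.
image = np.random.default_rng(0).random((3, 8, 8))     # only the owner sees this
section_a = ThirdPartySection(layer_widths=[8, 16], seed=1)
section_b = ThirdPartySection(layer_widths=[32], seed=2)

out = section_b.forward(section_a.forward(image))
print(len(section_a.observed_feature_maps), "feature maps leaked to party A")
```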

Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks

no code implementations • 5 May 2021 • Faiq Khalid, Muhammad Abdullah Hanif, Muhammad Shafique

From tiny pacemaker chips to aircraft collision avoidance systems, state-of-the-art Cyber-Physical Systems (CPS) have increasingly started to rely on Deep Neural Networks (DNNs).

Collision Avoidance

MacLeR: Machine Learning-based Run-Time Hardware Trojan Detection in Resource-Constrained IoT Edge Devices

no code implementations • 21 Nov 2020 • Faiq Khalid, Syed Rafay Hasan, Sara Zia, Osman Hasan, Falah Awwad, Muhammad Shafique

To reduce the overhead of data acquisition, we propose a single power-port current acquisition block that uses current sensors in time-division multiplexing, which increases accuracy while incurring a lower area overhead.

BIG-bench Machine Learning
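The paper ships no code; the snippet below is only a rough illustration of the idea from the abstract above, namely time-division multiplexing several current sensors through one shared acquisition port. The `read_sensor` model, the round-robin slot schedule, and the mA values are hypothetical.

```python
import itertools
import random

# Hypothetical per-sensor current readings (in mA); in the actual system these
# would come from current sensors attached to different power pins of the SoC.
def read_sensor(sensor_id):
    return 10.0 + sensor_id + random.gauss(0, 0.1)

def acquire_tdm(sensor_ids, n_slots):
    """Time-division multiplexing: one shared acquisition port samples the
    sensors in a round-robin schedule, returning (slot, sensor, value) tuples."""
    schedule = itertools.cycle(sensor_ids)
    samples = []
    for slot in range(n_slots):
        sensor = next(schedule)
        samples.append((slot, sensor, read_sensor(sensor)))
    return samples

# A single power-port block sampling four sensors over twelve time slots.
for slot, sensor, value in acquire_tdm(sensor_ids=[0, 1, 2, 3], n_slots=12):
    print(f"slot {slot:2d}: sensor {sensor} -> {value:.2f} mA")
```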

FANNet: Formal Analysis of Noise Tolerance, Training Bias and Input Sensitivity in Neural Networks

no code implementations • 3 Dec 2019 • Mahum Naseer, Mishal Fatima Minhas, Faiq Khalid, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique

With a constant improvement in the network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems.

General Classification

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

no code implementations • 4 Feb 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN with the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t.

Data Poisoning

RED-Attack: Resource Efficient Decision based Attack for Machine Learning

1 code implementation • 29 Jan 2019 • Faiq Khalid, Hassan Ali, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

To address this limitation, decision-based attacks have been proposed that can estimate the model, but they require several thousand queries to generate a single untargeted attack image.

BIG-bench Machine Learning, General Classification, +1
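The linked implementation is the authoritative reference; the sketch below merely illustrates the general class of decision-based attacks the abstract refers to (query only the model's decision, keep perturbations that remain adversarial while shrinking the distance to the original). The toy model, step sizes, and query budget are illustrative, and this is not the authors' resource-efficient variant.

```python
import numpy as np

def decision_based_attack(query_label, x_orig, x_adv_start, n_queries, rng):
    """Generic decision-based attack loop (not RED-Attack itself):
    only the model's top-1 decision is observable, and the attack tries to
    shrink the perturbation while keeping the queried label adversarial."""
    clean_label = query_label(x_orig)
    x_adv = x_adv_start.copy()
    for _ in range(n_queries):
        # Propose a point slightly closer to the original, plus a random nudge.
        candidate = x_adv + 0.05 * (x_orig - x_adv)
        candidate += 0.01 * rng.standard_normal(x_orig.shape)
        candidate = np.clip(candidate, 0.0, 1.0)
        if query_label(candidate) != clean_label:   # still misclassified
            x_adv = candidate
    return x_adv

# Toy "model": a linear decision on the mean pixel value.
query_label = lambda x: int(x.mean() > 0.5)

rng = np.random.default_rng(0)
x_orig = np.full((8, 8), 0.4)          # classified as 0
x_start = np.full((8, 8), 0.9)         # classified as 1 (adversarial start)
x_adv = decision_based_attack(query_label, x_orig, x_start, n_queries=2000, rng=rng)
print("queries used: 2000, final L2 distance:", np.linalg.norm(x_adv - x_orig))
```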

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks

no code implementations • 28 Jan 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

Capsule Networks preserve the hierarchical spatial relationships between objects, and thereby have the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) on tasks like image classification.

Image Classification, Traffic Sign Recognition

Security for Machine Learning-based Systems: Attacks and Challenges during Training and Inference

no code implementations • 5 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Muhammad Shafique

Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to efficiently and accurately process enormous amounts of data.

BIG-bench Machine Learning, Traffic Sign Recognition

QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks

1 code implementation • 4 Nov 2018 • Faiq Khalid, Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to the convolutional neural networks (CNNs).

Quantization
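The repository linked above contains the actual defense; as a generic illustration of the idea suggested by the title, the sketch below quantizes input intensities onto a few discrete levels so that small adversarial perturbations are often mapped back to the clean value. The number of levels and the noise magnitude are arbitrary, and the paper's mechanism may differ (for example in how the quantization levels are chosen).

```python
import numpy as np

def quantize_inputs(x, n_levels):
    """Map pixel intensities in [0, 1] onto n_levels discrete levels.
    Coarse quantization can wash out small adversarial perturbations
    before the image reaches the CNN."""
    levels = np.linspace(0.0, 1.0, n_levels)
    idx = np.clip(np.round(x * (n_levels - 1)).astype(int), 0, n_levels - 1)
    return levels[idx]

rng = np.random.default_rng(0)
clean = rng.random((4, 4))
adversarial = np.clip(clean + 0.03 * rng.standard_normal((4, 4)), 0.0, 1.0)

q_clean = quantize_inputs(clean, n_levels=4)
q_adv = quantize_inputs(adversarial, n_levels=4)
print("pixels where quantization removed the perturbation:",
      int((q_clean == q_adv).sum()), "of 16")
```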

SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters

1 code implementation • 4 Nov 2018 • Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

In this paper, we introduce a novel defense based on Secure Selective Convolutional (SSC) techniques in the training loop, which increases the robustness of a given DNN by allowing it to learn the data distribution based on the important edges in the input image.
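The linked code is the reference implementation; the sketch below only illustrates the general idea of edge-aware input preprocessing in a training loop, using a plain Sobel edge map as a stand-in for the Secure Selective Convolutional filters. The blending weights and image sizes are hypothetical.

```python
import numpy as np

def sobel_edges(img):
    """Generic Sobel edge map as a stand-in for an edge-aware preprocessing
    filter applied to the training input."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# Hypothetical training-loop hook: feed the network an edge-filtered image
# (or a blend of image and edges) so it learns from the important edges.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
edge_map = sobel_edges(image)
blend = 0.7 * image + 0.3 * (edge_map / (edge_map.max() + 1e-8))
print("edge-filtered training input shape:", blend.shape)
```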

SIMCom: Statistical Sniffing of Inter-Module Communications for Run-time Hardware Trojan Detection

no code implementations • 4 Nov 2018 • Faiq Khalid, Syed Rafay Hasan, Osman Hasan, Muhammad Shafique

We present SIMCom, a run-time methodology for HT detection that employs multi-parameter statistical traffic modeling of the communication channel in a given System-on-Chip (SoC).
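No code accompanies the paper; the toy model below only illustrates what multi-parameter statistical traffic modeling for run-time hardware Trojan detection can look like: fit simple statistics of the inter-module traffic on Trojan-free ("golden") traces, then flag windows whose parameters deviate beyond a threshold. The chosen parameters (packets per window, mean inter-arrival time) and the 3-sigma rule are assumptions for illustration, not the paper's model.

```python
import numpy as np

class TrafficModel:
    """Toy multi-parameter statistical model of inter-module traffic:
    mean and standard deviation of per-window packet counts and
    inter-arrival times, fitted on Trojan-free ("golden") traces."""
    def __init__(self, golden_windows, threshold=3.0):
        self.mu = golden_windows.mean(axis=0)
        self.sigma = golden_windows.std(axis=0) + 1e-9
        self.threshold = threshold

    def is_suspicious(self, window):
        # Flag the window if any parameter deviates by > threshold sigmas.
        z = np.abs((window - self.mu) / self.sigma)
        return bool((z > self.threshold).any())

rng = np.random.default_rng(0)
# Columns: [packets per window, mean inter-arrival time] -- hypothetical parameters.
golden = np.column_stack([rng.normal(100, 5, 500), rng.normal(2.0, 0.1, 500)])
model = TrafficModel(golden)

normal_window = np.array([103.0, 1.95])
trojan_window = np.array([160.0, 0.70])   # extra covert traffic on the channel
print(model.is_suspicious(normal_window), model.is_suspicious(trojan_window))
```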

FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning

no code implementations • 4 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Junaid Qadir, Muhammad Shafique

Deep neural network (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for classification tasks, due to their superior capability to learn efficiently from large datasets.

Adversarial Attack, BIG-bench Machine Learning, +1

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks

no code implementations • 2 Nov 2018 • Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique

Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be handled by preprocessing during inference or identified during the validation phase.

Autonomous Driving, Data Poisoning, +4

A Roadmap Towards Resilient Internet of Things for Cyber-Physical Systems

no code implementations • 16 Oct 2018 • Denise Ratasich, Faiq Khalid, Florian Geissler, Radu Grosu, Muhammad Shafique, Ezio Bartocci

Furthermore, this paper presents the main challenges in building a resilient IoT for CPS, which is crucial in the era of smart CPS with enhanced connectivity (connected autonomous vehicles being an excellent example of such systems).

Anomaly Detection, Autonomous Vehicles
