Search Results for author: Alberto Marchisio

Found 35 papers, 10 papers with code

SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking Neural Networks for Autonomous Agents

no code implementations • 14 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique

The experimental results show that our proposed framework can maintain high accuracy (i.e., 84.12% accuracy) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy efficiency improvement as compared to the state-of-the-art work for the NCARS dataset, thereby enabling energy-efficient embodied SNN deployments for autonomous agents.

Quantization

Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack

no code implementations • 4 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Fakhreddine Zayer, Jorge Dias, Muhammad Shafique

Toward this, recent advances in neuromorphic computing with Spiking Neural Networks (SNN) have demonstrated the potential to enable embodied intelligence for robotics through a bio-plausible computing paradigm that mimics how the biological brain works, known as "neuromorphic artificial intelligence (AI)".

A Methodology to Study the Impact of Spiking Neural Network Parameters considering Event-Based Automotive Data

no code implementations • 4 Apr 2024 • Iqra Bano, Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique

Toward this, we propose a novel methodology to systematically study and analyze the impact of SNN parameters considering event-based automotive data, and then leverage this analysis to enhance SNN developments.

Autonomous Driving • Image Classification • +2
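
As a rough illustration of what such a parameter study can look like, the sketch below sweeps the threshold and leak of a single leaky integrate-and-fire neuron on a synthetic spike train; the neuron model, parameter ranges, and metric are illustrative assumptions, whereas the paper's methodology targets real event-based automotive data.

# Toy sweep over LIF neuron parameters (threshold, leak) on a synthetic spike train.
import itertools
import numpy as np

rng = np.random.default_rng(0)
input_spikes = rng.random(1000) < 0.2          # synthetic Bernoulli spike train

def lif_spike_count(spikes, threshold, decay, w=0.5):
    """Simulate a single leaky integrate-and-fire neuron and count output spikes."""
    v, count = 0.0, 0
    for s in spikes:
        v = decay * v + w * s                  # leak + weighted input
        if v >= threshold:                     # fire and reset
            count += 1
            v = 0.0
    return count

for threshold, decay in itertools.product([0.5, 1.0, 2.0], [0.8, 0.9, 0.99]):
    print(f"threshold={threshold:>4}  decay={decay:<4}  "
          f"output spikes={lif_spike_count(input_spikes, threshold, decay)}")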

QFNN-FFD: Quantum Federated Neural Network for Financial Fraud Detection

no code implementations • 3 Apr 2024 • Nouhaila Innan, Alberto Marchisio, Muhammad Shafique, Mohamed Bennai

This study introduces the Quantum Federated Neural Network for Financial Fraud Detection (QFNN-FFD), a cutting-edge framework merging Quantum Machine Learning (QML) and quantum computing with Federated Learning (FL) to innovate financial fraud detection.

Federated Learning • Fraud Detection • +1

FedQNN: Federated Learning using Quantum Neural Networks

no code implementations • 16 Mar 2024 • Nouhaila Innan, Muhammad Al-Zafar Khan, Alberto Marchisio, Muhammad Shafique, Mohamed Bennai

In this study, we explore the innovative domain of Quantum Federated Learning (QFL) as a framework for training Quantum Machine Learning (QML) models via distributed networks.

Federated Learning • Quantum Machine Learning
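
For readers unfamiliar with the federated part, a minimal FedAvg-style aggregation loop looks roughly like the sketch below; the toy NumPy model and local update rule are assumptions for illustration, whereas FedQNN's clients train quantum neural networks.

# Minimal FedAvg-style aggregation: clients compute local updates, the server averages them.
import numpy as np

rng = np.random.default_rng(0)
global_weights = np.zeros(8)                       # toy model: one weight vector

def local_update(weights, client_id, lr=0.1):
    """Placeholder for a client's local training step (here: a noisy gradient step)."""
    grad = rng.normal(loc=client_id * 0.01, scale=0.1, size=weights.shape)
    return weights - lr * grad

for rnd in range(5):                               # communication rounds
    client_weights = [local_update(global_weights.copy(), cid) for cid in range(4)]
    global_weights = np.mean(client_weights, axis=0)   # server-side averaging
    print(f"round {rnd}: ||w|| = {np.linalg.norm(global_weights):.4f}")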

A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks

no code implementations • 10 Aug 2023 • Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina, Muhammad Shafique

Machine learning (ML) is widely used today, especially through deep neural networks (DNNs); however, increasing computational load and resource requirements have led to cloud-based solutions.

Privacy Preserving

SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers

1 code implementation • 8 Apr 2023 • Alberto Marchisio, Davide Dura, Maurizio Capra, Maurizio Martina, Guido Masera, Muhammad Shafique

In particular, fixed-point quantization is desirable to ease the computations using lightweight blocks, like adders and multipliers, of the underlying hardware.

Neural Network Compression • Quantization
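
As a rough sketch of the fixed-point idea mentioned above (the bit-widths and rounding rule here are illustrative assumptions, not SwiftTron's actual configuration):

# Toy symmetric fixed-point quantizer: integer codes plus dequantized values.
import numpy as np

def to_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize to signed fixed-point with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)
    return q, q.astype(np.float32) / scale

x = np.array([-1.7, -0.3, 0.05, 0.8, 1.9], dtype=np.float32)
codes, x_hat = to_fixed_point(x)
print("codes      :", codes)
print("dequantized:", x_hat)
print("max error  :", np.max(np.abs(x - x_hat)))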

RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks

no code implementations • 8 Apr 2023 • Alberto Marchisio, Antonio De Marco, Alessio Colucci, Maurizio Martina, Muhammad Shafique

Overall, CapsNets achieve better robustness against adversarial examples and affine transformations, compared to a traditional CNN with a similar number of parameters.

Image Classification

AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient

1 code implementation • 13 Oct 2022 • Farzad Nikfam, Alberto Marchisio, Maurizio Martina, Muhammad Shafique

The experiments show results comparable with related works, and in several experiments the adversarial training of DNNs using our AccelAT framework runs up to 2 times faster than existing techniques.

Adversarial Attack
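
A hypothetical learning-rate policy driven by the slope of the accuracy curve is sketched below; the function name, thresholds, and decision rule are illustrative assumptions, not AccelAT's exact algorithm.

# Toy accuracy-gradient scheduler: decay the LR on plateaus, raise it while accuracy climbs.
import numpy as np

def next_learning_rate(acc_history, lr, window=3, flat_tol=0.002,
                       decay=0.5, boost=1.1, lr_min=1e-5, lr_max=0.1):
    """Adjust the learning rate from the average accuracy change over recent epochs."""
    if len(acc_history) < window + 1:
        return lr
    recent = np.asarray(acc_history[-(window + 1):])
    slope = np.mean(np.diff(recent))               # average accuracy change per epoch
    lr = lr * decay if slope < flat_tol else lr * boost
    return float(np.clip(lr, lr_min, lr_max))

accs, lr = [], 0.01
for epoch, acc in enumerate([0.42, 0.55, 0.63, 0.68, 0.69, 0.695, 0.696]):
    accs.append(acc)
    lr = next_learning_rate(accs, lr)
    print(f"epoch {epoch}: acc={acc:.3f}  next lr={lr:.5f}")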

LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor

no code implementations • 3 Aug 2022 • Alberto Viale, Alberto Marchisio, Maurizio Martina, Guido Masera, Muhammad Shafique

Autonomous Driving (AD) related features represent important elements for the next generation of mobile robots and autonomous vehicles focused on increasingly intelligent, autonomous, and interconnected systems.

Autonomous Driving • Lane Detection

CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for Energy-Efficient Low-precision Deep Convolutional Neural Networks

no code implementations • 31 Jul 2022 • Muhammad Abdullah Hanif, Giuseppe Maria Sarda, Alberto Marchisio, Guido Masera, Maurizio Martina, Muhammad Shafique

The high computational complexity of these networks, which translates to increased energy consumption, is the foremost obstacle towards deploying large DNNs in resource-constrained systems.

Quantization

Enabling Capsule Networks at the Edge through Approximate Softmax and Squash Operations

no code implementations • 21 Jun 2022 • Alberto Marchisio, Beatrice Bussolino, Edoardo Salvati, Maurizio Martina, Guido Masera, Muhammad Shafique

In our experiments, we evaluate tradeoffs between area, power consumption, and critical path delay of the designs implemented with the ASIC design flow, and the accuracy of the quantized CapsNets, compared to the exact functions.
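
For reference, the exact softmax and squash non-linearities that the paper approximates can be written as in the NumPy sketch below; the hardware-friendly approximations themselves are not reproduced here.

# Exact CapsNet non-linearities: numerically stable softmax and the squash function.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-9):
    """Shrink short capsule vectors toward 0 and long ones toward unit length."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

caps = np.array([[0.1, -0.2, 0.05], [2.0, -1.5, 3.0]])
print("squashed lengths:", np.linalg.norm(squash(caps), axis=-1))
print("softmax of logits:", softmax(np.array([1.0, 2.0, 3.0])))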

fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems

no code implementations • 27 May 2022 • Alberto Marchisio, Giovanni Caramia, Maurizio Martina, Muhammad Shafique

Recently, Deep Neural Networks (DNNs) have achieved remarkable performance in many applications, while several studies have highlighted their vulnerability to malicious attacks.

Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework

no code implementations • 20 Sep 2021 • Muhammad Shafique, Alberto Marchisio, Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif

Afterward, we discuss how to further improve the performance (latency) and the energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation.

Quantization
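
As a toy illustration of one of the listed optimizations, the sketch below applies magnitude-based pruning to a weight matrix; the sparsity target and threshold rule are assumptions for illustration only.

# Magnitude pruning: zero out the smallest-magnitude weights to reach a target sparsity.
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Return a copy of `weights` with the smallest |w| entries set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.75)
print("non-zeros before/after:", np.count_nonzero(w), np.count_nonzero(w_pruned))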

R-SNN: An Analysis and Design Methodology for Robustifying Spiking Neural Networks against Adversarial Attacks through Noise Filters for Dynamic Vision Sensors

1 code implementation • 1 Sep 2021 • Alberto Marchisio, Giacomo Pira, Maurizio Martina, Guido Masera, Muhammad Shafique

Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS).
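
A common baseline for the kind of noise filter studied here is a background-activity filter that keeps a DVS event only if a neighbouring pixel fired recently; the time window, neighbourhood size, and sensor resolution below are illustrative assumptions, not the exact filters analyzed in R-SNN.

# Background-activity filter for DVS events: keep an event only if supported by nearby recent activity.
import numpy as np

def filter_events(events, sensor_size=(128, 128), dt_us=5000):
    """events: iterable of (x, y, t_us, polarity); returns the retained events."""
    last_t = np.full(sensor_size, -np.inf)         # last event timestamp per pixel
    kept = []
    for x, y, t, p in events:
        x0, x1 = max(x - 1, 0), min(x + 2, sensor_size[0])
        y0, y1 = max(y - 1, 0), min(y + 2, sensor_size[1])
        if np.any(t - last_t[x0:x1, y0:y1] <= dt_us):   # supported by a neighbour
            kept.append((x, y, t, p))
        last_t[x, y] = t
    return kept

events = [(10, 10, 100, 1), (11, 10, 600, 1), (90, 40, 700, 0)]  # last event is isolated noise
print(filter_events(events))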

DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks

1 code implementation • 1 Jul 2021 • Alberto Marchisio, Giacomo Pira, Maurizio Martina, Guido Masera, Muhammad Shafique

Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats, such as adversarial attacks, i.e., small perturbations added to the input for inducing a misclassification.

Adversarial Attack
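
To make the notion of "small perturbations" concrete, a minimal FGSM-style example on a toy PyTorch classifier is sketched below; the paper's attacks instead target DVS event streams and spiking models, which behave differently.

# FGSM-style perturbation: one gradient-sign step on the input, clipped to a valid range.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 3)                     # stand-in classifier
x = torch.rand(1, 16, requires_grad=True)
y = torch.tensor([2])

loss = F.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print("clean prediction      :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())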

Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters

1 code implementation • 9 Dec 2020 • Rida El-Allami, Alberto Marchisio, Muhammad Shafique, Ihsen Alouani

We thoroughly study the security of SNNs under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters.

DESCNet: Developing Efficient Scratchpad Memories for Capsule Network Hardware

no code implementations • 12 Oct 2020 • Alberto Marchisio, Vojtech Mrazek, Muhammad Abdullah Hanif, Muhammad Shafique

We analyze the corresponding on-chip memory requirements and leverage this analysis to propose a novel methodology to explore different scratchpad memory designs and their energy/area trade-offs.

Management

NASCaps: A Framework for Neural Architecture Search to Optimize the Accuracy and Hardware Efficiency of Convolutional Capsule Networks

1 code implementation • 19 Aug 2020 • Alberto Marchisio, Andrea Massa, Vojtech Mrazek, Beatrice Bussolino, Maurizio Martina, Muhammad Shafique

Deep Neural Networks (DNNs) have seen significant accuracy improvements, reaching the levels required to be employed in a wide variety of Machine Learning (ML) applications.

Neural Architecture Search
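
A toy multi-objective search loop conveys the basic idea; NASCaps itself uses a genetic algorithm together with trained CapsNet candidates and hardware models, whereas the search space and objective stubs below are assumptions.

# Random multi-objective architecture search, keeping the Pareto front of (accuracy, energy).
import random

random.seed(0)
SPACE = {"layers": [3, 5, 7], "capsule_dim": [4, 8, 16], "routing_iters": [1, 2, 3]}

def evaluate(arch):
    """Stub objectives: bigger configurations score higher accuracy but cost more energy."""
    size = arch["layers"] * arch["capsule_dim"] * arch["routing_iters"]
    accuracy = 0.70 + 0.25 * (size / 336) + random.uniform(-0.02, 0.02)
    energy = size * 1.5
    return accuracy, energy

def dominates(a, b):
    return a[0] >= b[0] and a[1] <= b[1] and a != b

samples = []
for _ in range(20):
    arch = {k: random.choice(v) for k, v in SPACE.items()}
    samples.append((arch, evaluate(arch)))

pareto = [s for s in samples if not any(dominates(o[1], s[1]) for o in samples)]
for arch, (acc, energy) in pareto:
    print(f"{arch}  acc={acc:.3f}  energy={energy:.0f}")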

An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor

1 code implementation • 16 May 2020 • Riccardo Massa, Alberto Marchisio, Maurizio Martina, Muhammad Shafique

Towards the conversion from a DNN to an SNN, we perform a comprehensive analysis of this process, specifically designed for Intel Loihi, showing our methodology for the design of an SNN that achieves nearly the same accuracy results as its corresponding DNN.

Gesture Recognition • Image Classification
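
The core intuition behind such rate-based conversion is that the firing rate of an integrate-and-fire neuron approximates a ReLU activation, as in the sketch below; Loihi-specific constraints and the paper's weight/threshold balancing are not modelled here.

# Rate coding: an IF neuron driven by a constant input approximates ReLU over T timesteps.
def if_firing_rate(a, T=200, threshold=1.0):
    """Drive an integrate-and-fire neuron with constant input `a` and return its spike rate."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a
        if v >= threshold:
            spikes += 1
            v -= threshold                        # reset by subtraction
    return spikes / T

for a in [-0.3, 0.0, 0.25, 0.5, 0.9]:
    print(f"ReLU({a:+.2f}) = {max(a, 0):.2f}   IF rate ~ {if_firing_rate(a):.2f}")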

Q-CapsNets: A Specialized Framework for Quantizing Capsule Networks

no code implementations • 15 Apr 2020 • Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Maurizio Martina, Guido Masera, Muhammad Shafique

Capsule Networks (CapsNets), recently proposed by the Google Brain team, have superior learning capabilities in machine learning tasks, like image classification, compared to traditional CNNs.

Image Classification • Quantization

ReD-CaNe: A Systematic Methodology for Resilience Analysis and Design of Capsule Networks under Approximations

no code implementations • 2 Dec 2019 • Alberto Marchisio, Vojtech Mrazek, Muhammad Abdullah Hanif, Muhammad Shafique

To the best of our knowledge, this is the first proof-of-concept for employing approximations on the specialized CapsNet hardware.

FasTrCaps: An Integrated Framework for Fast yet Accurate Training of Capsule Networks

1 code implementation • 24 May 2019 • Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Muhammad Abdullah Hanif, Maurizio Martina, Guido Masera, Muhammad Shafique

The goal is to reduce the hardware requirements of CapsNets by removing unused/redundant connections and capsules, while keeping high accuracy through tests of different learning rate policies and batch sizes.

Image Classification • Object Detection

CapStore: Energy-Efficient Design and Management of the On-Chip Memory for CapsuleNet Inference Accelerators

no code implementations • 4 Feb 2019 • Alberto Marchisio, Muhammad Abdullah Hanif, Mohammad Taghi Teimoori, Muhammad Shafique

By leveraging this analysis, we propose a methodology to explore different on-chip memory designs and a power-gating technique to further reduce the energy consumption, depending upon the utilization across different operations of a CapsuleNet.

Management

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

no code implementations • 4 Feb 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t.

Data Poisoning

CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks

no code implementations • 28 Jan 2019 • Alberto Marchisio, Giorgio Nanfa, Faiq Khalid, Muhammad Abdullah Hanif, Maurizio Martina, Muhammad Shafique

Capsule Networks preserve the hierarchical spatial relationships between objects, and thereby bear the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in performing tasks like image classification.

Image Classification • Traffic Sign Recognition

CapsAcc: An Efficient Hardware Accelerator for CapsuleNets with Data Reuse

no code implementations • 2 Nov 2018 • Alberto Marchisio, Muhammad Abdullah Hanif, Muhammad Shafique

In this paper, we propose CapsAcc, the first specialized CMOS-based hardware architecture to perform CapsuleNets inference with high performance and energy efficiency.

Distributed, Parallel, and Cluster Computing • Hardware Architecture
