Search Results for author: Jean-Max Dutertre

Found 7 papers, 0 papers with code

Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models

no code implementations • 31 Aug 2023 • Kevin Hector, Pierre-Alain Moellic, Mathieu Dumont, Jean-Max Dutertre

We focus on embedded deep neural network models on 32-bit microcontrollers, a widespread family of hardware platforms in the IoT, and on the use of a standard fault injection strategy, the Safe Error Attack (SEA), to perform a model extraction attack with an adversary who has only limited access to training data.

Model extraction
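
As a rough illustration of the safe-error principle this abstract describes, the Python sketch below recovers int8 weight bits by forcing each stored bit to 0 (a stuck-at fault) and checking whether the output changes; the toy dot-product model, the fault model, and all names are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=8, dtype=np.int8)  # secret weights to recover
x = rng.integers(1, 128, size=8, dtype=np.int16)          # attacker-chosen nonzero input

def infer(w):
    # Toy inference: a single dot product standing in for the victim model.
    return int(np.dot(w.astype(np.int32), x.astype(np.int32)))

def stuck_at_0(w, idx, bit):
    # Fault model: force one bit of one stored weight to 0.
    faulted = w.copy()
    faulted.view(np.uint8)[idx] &= ~(1 << bit) & 0xFF
    return faulted

reference = infer(weights)
recovered = np.zeros(len(weights), dtype=np.uint8)
for idx in range(len(weights)):
    for bit in range(8):
        # Safe error: if the output is unchanged, the bit was already 0;
        # if it changes, the original bit must have been 1.
        if infer(stuck_at_0(weights, idx, bit)) != reference:
            recovered[idx] |= 1 << bit

assert np.array_equal(recovered.view(np.int8), weights)
print("all weight bits recovered via safe-error analysis")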

Fault Injection on Embedded Neural Networks: Impact of a Single Instruction Skip

no code implementations • 31 Aug 2023 • Clement Gaine, Pierre-Alain Moellic, Olivier Potin, Jean-Max Dutertre

With the large-scale integration and use of neural network models, especially in critical embedded systems, assessing their security to guarantee their reliability is becoming an urgent need.
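
To give a concrete feel for the single-instruction-skip fault model named in the title, here is a purely software toy that simulates skipping one multiply-accumulate in a neuron's inner loop; the loop, the values, and the fault position are assumptions for illustration, not the paper's experimental setup.

def neuron(weights, inputs, skip_index=None):
    acc = 0
    for i, (w, x) in enumerate(zip(weights, inputs)):
        if i == skip_index:      # fault model: this MAC instruction never executes
            continue
        acc += w * x
    return max(acc, 0)           # ReLU, as in a typical embedded layer

w = [3, -2, 5, 1]
x = [2, 1, 1, 2]
print("fault-free:", neuron(w, x))
for k in range(len(w)):
    print(f"skip MAC {k}:", neuron(w, x, skip_index=k))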

Evaluation of Parameter-based Attacks against Embedded Neural Networks with Laser Injection

no code implementations • 25 Apr 2023 • Mathieu Dumont, Kevin Hector, Pierre-Alain Moellic, Jean-Max Dutertre, Simon Pontié

Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges, which are amplified by the large-scale deployment of models on many hardware platforms.

A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks

no code implementations • 28 Sep 2022 • Kevin Hector, Mathieu Dumont, Pierre-Alain Moellic, Jean-Max Dutertre

Deep neural network models are massively deployed on a wide variety of hardware platforms.
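
As a toy illustration of the bit-flip attack (BFA) setting the paper evaluates, the sketch below shows how flipping a single, well-chosen bit of a stored int8 weight can swing a model's output; the two-neuron "network" and the bit-selection heuristic are assumptions for illustration, not the paper's method.

import numpy as np

def flip_bit(w, idx, bit):
    # Fault model: XOR one bit of one stored weight.
    faulted = w.copy()
    faulted.view(np.uint8)[idx] ^= np.uint8(1 << bit)
    return faulted

w = np.array([13, -7, 42, 5], dtype=np.int8)   # victim's quantized weights
x = np.array([1.0, -0.5, 0.25, 2.0])           # a benign input

def score(weights):
    return float(weights.astype(np.float32) @ x)

clean = score(w)
# Attacker heuristic (assumed here): target the sign bit (bit 7) of the
# weight with the largest influence on this input.
idx = int(np.argmax(np.abs(w.astype(np.int32) * x)))
attacked = score(flip_bit(w, idx, 7))
print(f"clean score={clean:+.2f}, after one bit flip={attacked:+.2f}")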

An Overview of Laser Injection against Embedded Neural Network Models

no code implementations • 4 May 2021 • Mathieu Dumont, Pierre-Alain Moellic, Raphael Viera, Jean-Max Dutertre, Rémi Bernhard

For many IoT domains, Machine Learning, and more particularly Deep Learning, brings highly efficient solutions for handling complex data and performing challenging, often critical tasks.

BIG-bench Machine Learning

Luring of transferable adversarial perturbations in the black-box paradigm

no code implementations • 10 Apr 2020 • Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre

The growing interest in adversarial examples, i.e. maliciously modified inputs that fool a classifier, has resulted in many defenses intended to detect them, render them harmless, or make the model more robust against them.
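
For readers unfamiliar with how such maliciously modified inputs are crafted, here is a minimal sketch using FGSM, a standard gradient-sign attack distinct from this paper's luring defense; the tiny logistic classifier and the epsilon value are illustrative assumptions.

import numpy as np

w = np.array([2.0, -3.0, 1.0])     # fixed classifier weights
x = np.array([0.5, -0.4, 0.3])     # clean input, true label y = 1
y, eps = 1.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss w.r.t. the input of a linear model:
# dL/dx = (sigmoid(w.x) - y) * w
grad = (sigmoid(w @ x) - y) * w
x_adv = x + eps * np.sign(grad)    # FGSM step: move along the gradient's sign

print("clean prediction:      ", sigmoid(w @ x))      # > 0.5 -> class 1
print("adversarial prediction:", sigmoid(w @ x_adv))  # pushed below 0.5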

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

no code implementations • 27 Sep 2019 • Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre

As the drive to deploy neural network models on embedded systems grows, and given the associated memory footprint and energy consumption constraints, lighter ways of storing neural networks, such as weight quantization, and more efficient inference methods have become major research topics.

Adversarial Robustness • BIG-bench Machine Learning • +2
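
As a minimal sketch of the weight quantization this abstract mentions, the snippet below applies symmetric uniform quantization to float weights at a chosen bitwidth; the per-tensor scale rule and the 4-bit setting are common choices assumed here, not necessarily the paper's.

import numpy as np

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for int8, 7 for int4
    scale = np.max(np.abs(weights)) / qmax     # symmetric, per-tensor scale
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=5).astype(np.float32)
q, s = quantize(w, bits=4)                     # aggressive low-bitwidth setting
print("original:   ", np.round(w, 3))
print("dequantized:", np.round(dequantize(q, s), 3))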
