no code implementations • 31 Aug 2023 • Kevin Hector, Pierre-Alain Moellic, Mathieu Dumont, Jean-Max Dutertre
We focus on embedded deep neural network models on 32-bit microcontrollers, a widespread family of hardware platforms in IoT, and the use of a standard fault injection strategy, the Safe Error Attack (SEA), to perform a model extraction attack with an adversary having limited access to training data.
no code implementations • 31 Aug 2023 • Clement Gaine, Pierre-Alain Moellic, Olivier Potin, Jean-Max Dutertre
With the large-scale integration and use of neural network models, especially in critical embedded systems, their security assessment to guarantee their reliability is becoming an urgent need.
no code implementations • 25 Apr 2023 • Mathieu Dumont, Kevin Hector, Pierre-Alain Moellic, Jean-Max Dutertre, Simon Pontié
Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges that are amplified by the large-scale deployment of models in many hardware platforms.
no code implementations • 28 Sep 2022 • Kevin Hector, Mathieu Dumont, Pierre-Alain Moellic, Jean-Max Dutertre
Deep neural network models are massively deployed on a wide variety of hardware platforms.
no code implementations • 4 May 2021 • Mathieu Dumont, Pierre-Alain Moellic, Raphael Viera, Jean-Max Dutertre, Rémi Bernhard
For many IoT domains, Machine Learning, and more particularly Deep Learning, brings very efficient solutions to handle complex data and perform challenging, often critical tasks.
no code implementations • 10 Apr 2020 • Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre
The growing interest in adversarial examples, i.e., maliciously modified inputs that fool a classifier, has resulted in many defenses intended to detect them, render them harmless, or make the model more robust against them.
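As an illustration of what "maliciously modified" means here, a minimal sketch of the classic fast-gradient-sign perturbation (not necessarily the method studied in this paper): each input feature is nudged by a small budget eps in the direction that increases the classifier's loss. The gradient values below are hypothetical placeholders.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: shift each feature by eps in the
    direction of the loss gradient's sign, then clip to valid range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = np.array([0.2, 0.5, 0.8])          # clean input
grad = np.array([0.3, -0.7, 0.1])      # hypothetical loss gradient w.r.t. x
x_adv = fgsm_perturb(x, grad)          # [0.3, 0.4, 0.9]
```

The perturbation is bounded by eps per feature, so the adversarial input stays visually or statistically close to the original while potentially crossing the model's decision boundary.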
no code implementations • 27 Sep 2019 • Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre
As the demand to deploy neural network models on embedded systems grows, and given the associated memory footprint and energy consumption constraints, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics.
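To make the weight-quantization idea concrete, a minimal sketch of uniform affine quantization to 8-bit integers (a common baseline, not necessarily the scheme used in this paper): float weights are mapped onto 256 evenly spaced levels, cutting storage by 4x versus float32 at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine quantization: map float weights onto 256 levels."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale + lo

w = np.random.randn(100).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)
err = np.max(np.abs(w - w_hat))   # rounding error bounded by scale / 2
```

The per-weight error is at most half a quantization step, which is why moderate bit widths often preserve accuracy while shrinking the model's memory footprint.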