no code implementations • 25 Mar 2023 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin
We analyze the timing properties of several activation functions and design a desynchronization countermeasure that hides the dependency on both the input and the activation type.
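A minimal sketch of the underlying timing observation, under the assumption that activations are computed scalar-by-scalar in software (as on a small embedded target); the function names and measurement harness here are illustrative, not the paper's code:

```python
import math
import timeit

# Naive scalar activation implementations whose execution time can differ
# by activation type -- the kind of timing side channel being analyzed.
def relu(x):
    return x if x > 0.0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh_act(x):
    return math.tanh(x)

def time_activation(f, x, n=100_000):
    # Average wall-clock time per call; on a constrained device such
    # differences can leak which activation a layer uses.
    return timeit.timeit(lambda: f(x), number=n) / n

for name, f in [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh_act)]:
    print(name, time_activation(f, 0.5))
```

A desynchronization countermeasure would insert randomized delays so the measured times no longer correlate with the input value or the activation type.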
1 code implementation • 23 Sep 2021 • Jakub Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano
In particular, we discuss attacks against ReLU activation functions that generate a family of malicious inputs, called fooling inputs, which are used at inference time to induce controlled misclassifications.
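A toy sketch of why faulting a ReLU matters, assuming an instruction-skip fault model in which the activation is bypassed (the tiny network and the specific input below are hypothetical, chosen only to make the effect visible):

```python
import numpy as np

# Hypothetical 2-class linear layer followed by ReLU.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)

def forward(x, relu_faulted=False):
    z = W @ x + b
    # Fault model: an instruction skip turns ReLU into the identity,
    # so negative pre-activations reach the output layer.
    a = z if relu_faulted else np.maximum(z, 0.0)
    return int(np.argmax(a))

# A crafted input with all-negative pre-activations: the fault-free
# network sees [0, 0], while the faulted one sees [-2, -1], so the
# predicted class flips under the fault.
x = np.array([-2.0, -1.0])
print(forward(x), forward(x, relu_faulted=True))  # prints "0 1"
```

Fooling inputs generalize this idea: they are chosen so that the misclassification appears only when the fault is injected, keeping the model's behavior unchanged otherwise.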
no code implementations • 23 Feb 2020 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin, Yang Liu
In this paper we explore the possibility of reverse engineering neural networks using fault attacks.
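A simplified illustration of the principle, assuming a fault that flips the sign bit of a stored single-precision weight; the single-neuron model and recovery formula are a hypothetical sketch, not the paper's actual procedure:

```python
import struct

def flip_sign_bit(w):
    # Model a fault that flips the IEEE 754 sign bit of a float32 weight.
    bits = struct.unpack("<I", struct.pack("<f", w))[0]
    return struct.unpack("<f", struct.pack("<I", bits ^ 0x80000000))[0]

def neuron(w, x):
    return w * x

w, x = 0.75, 2.0
y = neuron(w, x)                       # fault-free output
y_faulted = neuron(flip_sign_bit(w), x)  # output under the sign-flip fault

# The two observations differ by 2*w*x, so the secret weight falls out:
recovered_w = (y - y_faulted) / (2.0 * x)
print(recovered_w)  # prints 0.75
```

Comparing fault-free and faulted outputs in this way lets an attacker recover hidden parameters one at a time, which is the core intuition behind fault-assisted reverse engineering.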
no code implementations • 15 Jun 2018 • Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, Yang Liu
As deep learning systems are widely adopted in safety- and security-critical applications such as autonomous vehicles and banking systems, malicious faults and attacks become a serious concern that could potentially lead to catastrophic consequences.