Search Results for author: Xiaolu Hou

Found 4 papers, 1 paper with code

A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks

no code implementations • 25 Mar 2023 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin

We analyze the timing properties of several activation functions and design the desynchronization in a way that the dependency on the input and the activation type is hidden.

Model extraction • Side Channel Analysis
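A minimal sketch of the general idea behind such a countermeasure, assuming a plain NumPy forward pass; the function desynchronized_activation, the random evaluation order, and the dummy-operation loop are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import secrets

def desynchronized_activation(z, activation, max_dummy_iters=64):
    """Apply `activation` neuron by neuron in a random order, inserting a
    random amount of dummy work so that execution timing no longer depends
    on the input values or on which activation function is used."""
    out = np.empty_like(z)
    flat_in, flat_out = z.ravel(), out.ravel()
    for idx in np.random.permutation(z.size):      # random evaluation order
        for _ in range(secrets.randbelow(max_dummy_iters)):
            _ = np.tanh(0.0)                        # dummy operation
        flat_out[idx] = activation(flat_in[idx])
    return out

# Example: desynchronize a ReLU layer on a small hidden vector.
relu = lambda v: v if v > 0 else 0.0
hidden = desynchronized_activation(np.random.randn(8), relu)
```

The random evaluation order and dummy loops trade a constant-factor slowdown for timing that is much harder to correlate with the processed values or the activation type.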

FooBaR: Fault Fooling Backdoor Attack on Neural Network Training

1 code implementation • 23 Sep 2021 • Jakub Breier, Xiaolu Hou, Martín Ochoa, Jesus Solano

In particular, we discuss attacks against ReLU activation functions that make it possible to generate a family of malicious inputs, called fooling inputs, to be used at inference time to induce controlled misclassifications.

Backdoor Attack • Image Classification
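As a rough illustration of what a fault on a ReLU during training could look like in software, here is a minimal PyTorch sketch; the FaultableReLU module and its fault_active flag are illustrative assumptions and do not reproduce the paper's fault model or attack pipeline.

```python
import torch
import torch.nn as nn

class FaultableReLU(nn.Module):
    def __init__(self, fault_active=False):
        super().__init__()
        self.fault_active = fault_active   # toggled by the simulated fault

    def forward(self, x):
        if self.fault_active:
            # Simulated fault: the comparison with zero is skipped, so
            # negative activations pass through unchanged during training.
            return x
        return torch.relu(x)

# Example: inject the simulated fault into a small classifier's forward pass.
model = nn.Sequential(nn.Linear(784, 128),
                      FaultableReLU(fault_active=True),
                      nn.Linear(128, 10))
logits = model(torch.randn(4, 784))        # forward pass with the faulted ReLU
```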

SNIFF: Reverse Engineering of Neural Networks with Fault Attacks

no code implementations • 23 Feb 2020 • Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin, Yang Liu

In this paper we explore the possibility of reverse engineering neural networks using fault attacks.
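To make the fault-assisted reverse-engineering idea concrete, here is a minimal sketch for a single linear neuron, assuming the attacker can flip the sign of one intermediate product and observe both the correct and the faulted output; the fault model and all names are illustrative simplifications, not the paper's method.

```python
import numpy as np

def neuron(w, x, b, flip_idx=None):
    """Compute a linear neuron y = sum_i(w_i * x_i) + b, optionally flipping
    the sign of one product (modelling a sign-bit fault on an intermediate
    multiplication result)."""
    prods = w * x
    if flip_idx is not None:
        prods[flip_idx] = -prods[flip_idx]
    return prods.sum() + b

w_true = np.array([0.7, -1.3, 2.1])        # secret weights to be recovered
x = np.array([0.5, 0.4, -0.9])             # known, attacker-chosen input
b = 0.2

y_ok = neuron(w_true, x, b)
y_fault = neuron(w_true, x, b, flip_idx=1)

# y_ok - y_fault = 2 * w_1 * x_1, so the targeted weight follows directly:
w1_recovered = (y_ok - y_fault) / (2 * x[1])
print(w1_recovered)                        # ~ -1.3
```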

DeepLaser: Practical Fault Attack on Deep Neural Networks

no code implementations • 15 Jun 2018 • Jakub Breier, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, Yang Liu

As deep learning systems are widely adopted in safety- and security-critical applications such as autonomous vehicles and banking systems, malicious faults and attacks become a serious concern that could potentially lead to catastrophic consequences.

Autonomous Vehicles
