A Stealthy Hardware Trojan Exploiting the Architectural Vulnerability of Deep Learning Architectures: Input Interception Attack (IIA)

2 Nov 2019 · Tolulope A. Odetola, Hawzhin Raoof Mohammed, Syed Rafay Hasan

Deep learning architectures (DLA) have shown impressive performance in computer vision, natural language processing, and other domains. Many DLA rely on cloud computing for classification because of their high computation and memory requirements. Privacy and latency concerns arising from cloud computing have inspired the deployment of DLA on embedded hardware accelerators. To achieve a short time-to-market and gain access to global expertise, state-of-the-art techniques for deploying DLA on hardware accelerators are outsourced to untrusted third parties. This outsourcing raises security concerns, since hardware Trojans can be inserted into the hardware design of the DLA mapped onto the accelerator. We argue that existing hardware Trojan attacks reported in the literature provide no qualitative means of assessing how definite they are that the Trojan will trigger. Moreover, most inserted Trojans show an obvious spike in the number of hardware resources utilized on the accelerator when the Trojan is triggered or when the payload is active. In this paper, we introduce a hardware Trojan attack called the Input Interception Attack (IIA). The attack exploits the statistical properties of layer-by-layer outputs so that, in addition to being stealthy, the IIA is able to trigger with some measure of definiteness. The IIA is tested on DLA used to classify the MNIST and CIFAR-10 data sets. The attacked designs utilize approximately up to 2% more LUTs compared to the uncompromised designs. Finally, this paper discusses potential defensive mechanisms that could be used to combat such hardware Trojan based attacks in hardware accelerators for DLA.
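To make the trigger idea concrete, the following is a minimal conceptual sketch in Python of a Trojan that fires on a statistical property of an intermediate layer's output. It is not the authors' implementation (the paper targets FPGA hardware designs, not software); the function name `trojan_layer`, the use of the output mean as the monitored statistic, and the threshold values are all illustrative assumptions.

```python
import numpy as np

# Hypothetical trigger band, assumed to be chosen offline by profiling the
# distribution of the targeted layer's outputs so the trigger fires with a
# small but predictable probability (not the paper's actual parameters).
TRIGGER_LOW, TRIGGER_HIGH = 0.42, 0.45

def trojan_layer(layer_output, benign_input, malicious_input):
    """Conceptual IIA-style wrapper around one DLA layer boundary.

    If a statistic of the layer's output falls inside a narrow, pre-profiled
    band, the Trojan intercepts the data feeding the next stage and
    substitutes attacker-chosen input; otherwise the design behaves normally.
    """
    stat = float(np.mean(layer_output))       # layer-by-layer statistic
    if TRIGGER_LOW <= stat <= TRIGGER_HIGH:   # rare but quantifiable event
        return malicious_input                # payload: input interception
    return benign_input                       # benign path: pass through
```

Because the profiling step gives an estimate of how often the monitored statistic lands in the trigger band, this style of trigger offers a measurable degree of certainty about activation, and the runtime check itself is only a comparison, which is consistent with the small resource overhead reported in the abstract.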
