The Taboo Trap: Behavioural Detection of Adversarial Samples

18 Nov 2018 · Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Deep Neural Networks (DNNs) have become a powerful tool for a wide range of problems. Yet recent work has found an increasing variety of adversarial samples that can fool them. Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples or by requiring the DNN to be restructured. In this paper, we introduce a novel defence. We train our DNN so that, as long as it is working as intended on the kind of inputs we expect, its behaviour is constrained, in that some set of behaviours are taboo. If it is exposed to adversarial samples, they will often cause a taboo behaviour, which we can detect. Taboos can be both subtle and diverse, so their choice can encode and hide information. It is a well-established design principle that the security of a system should not depend on the obscurity of its design, but on some variable (the key) which can differ between implementations and be changed as necessary. We discuss how taboos can be used to equip a classifier with just such a key, and how to tune the keying mechanism to adversaries of various capabilities. We evaluate the performance of a prototype against a wide range of attacks and show how this simple defence can protect against cheap attacks at scale with zero run-time computational overhead, making it a suitable defence method for IoT devices.
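To make the idea concrete, here is a minimal sketch (not the authors' implementation) of one possible taboo, assuming PyTorch: a per-layer cap on activation values. The network is trained with an extra penalty so that benign inputs keep their activations below the cap, and at run time any input that drives an activation over the cap breaks the taboo and is flagged. All names (`TabooNet`, `taboo_penalty`, `is_adversarial`) and the threshold and weight values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a taboo-style defence in PyTorch.
# Assumption: the "taboo" is a cap on hidden activation values, enforced on
# clean training data; names and constants here are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TabooNet(nn.Module):
    def __init__(self, taboo_threshold: float = 1.0):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, 10)
        self.taboo_threshold = taboo_threshold

    def forward(self, x):
        h = F.relu(self.conv(x))
        # Record how far each activation exceeds the taboo threshold; during
        # training this excess is penalised so benign inputs stay below the cap.
        self.excess = F.relu(h - self.taboo_threshold)
        return self.fc(h.flatten(1))

def taboo_penalty(model: TabooNet) -> torch.Tensor:
    # Extra loss term pushing hidden activations under the taboo threshold.
    return model.excess.pow(2).mean()

def is_adversarial(model: TabooNet, x: torch.Tensor) -> bool:
    # Run-time check: any activation above the threshold breaks the taboo,
    # so the input is flagged as suspicious.
    with torch.no_grad():
        model(x)
        return bool((model.excess > 0).any())

# Training would combine the task loss with the taboo penalty, e.g.
#   loss = F.cross_entropy(model(x), y) + 0.01 * taboo_penalty(model)
# where the weight 0.01 is a hyper-parameter to tune, not a value from the paper.
```

In this sketch the run-time check is only a comparison of activations against a stored threshold, which matches the abstract's claim of zero run-time computational overhead; the choice of which layers and thresholds form the taboo plays the role of the key.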
