no code implementations • 30 Sep 2023 • David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz
Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical world.
no code implementations • 24 Jun 2023 • Pranav Dahiya, Ilia Shumailov, Ross Anderson
As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it.
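As a purely illustrative sketch (not the paper's construction), the snippet below derives a deterministic, keyed bit stream with HMAC-SHA256 and checks it against a simple NIST-style monobit frequency test: the stream passes, yet anyone holding the key can reproduce every bit. The function names and parameters are hypothetical.

```python
# Illustrative sketch only: a keyed, fully predictable bit stream that still
# passes a simple NIST-style monobit test. Names and parameters are hypothetical.
import hmac, hashlib, math, os

def keyed_stream(key: bytes, n_bytes: int) -> bytes:
    """Deterministic counter-mode stream: HMAC-SHA256(key, counter)."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(out[:n_bytes])

def monobit_pvalue(data: bytes) -> float:
    """NIST SP 800-22 frequency (monobit) test on the bit expansion of data."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

key = os.urandom(32)                    # the attacker's secret
stream = keyed_stream(key, 1 << 16)     # looks random to the test...
print("monobit p-value:", monobit_pvalue(stream))            # typically well above 0.01
print("...but fully reproducible:", keyed_stream(key, 8) == stream[:8])
```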
1 code implementation • 12 Jun 2023 • Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot
While text-based machine learning models that operate on visual inputs of rendered text have become robust against a wide range of existing attacks, we show that they are still vulnerable to visual adversarial examples encoded as text.
1 code implementation • 27 May 2023 • Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson
It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images.
1 code implementation • 27 Apr 2023 • Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross Anderson, Mauro Conti
Search engines are vulnerable to attacks against indexing and searching via text encoding manipulation.
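A toy illustration of what encoding manipulation buys the attacker, not the paper's attack code: inserting a zero-width space makes two visually identical strings compare unequal, so a naive exact-match index or query no longer links them. The example strings are hypothetical.

```python
# Toy illustration: a zero-width space hidden inside a word makes two visually
# identical strings compare unequal, so a naive exact-match index misses one.
needle = "climate report"
poisoned = "climate re\u200bport"      # U+200B ZERO WIDTH SPACE hidden inside the word

print(needle == poisoned)              # False: the indexed entry no longer matches the query
print(len(needle), len(poisoned))      # 14 15: one invisible code point of difference
print(poisoned)                        # renders identically to "climate report" in most fonts
```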
no code implementations • 30 Sep 2022 • Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins
These backdoors are impossible to detect during the training or data preparation processes, because they are not yet present.
1 code implementation • 18 Jun 2021 • Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot
In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs.
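As a hedged example of one such perturbation class, the snippet below swaps a single Latin "a" for its Cyrillic homoglyph (U+0430): the rendered text is visually near-identical, but the byte sequence a text model tokenises changes. The example strings are hypothetical.

```python
# Hedged sketch: a single homoglyph substitution (Cyrillic U+0430 for Latin 'a')
# leaves the rendered text visually near-identical but alters the model's input.
clean = "bank transfer approved"
perturbed = clean.replace("a", "\u0430", 1)    # swap only the first 'a'

print(clean, "|", perturbed)                   # visually near-identical in most fonts
print(clean == perturbed)                      # False
print([hex(ord(c)) for c in perturbed[:2]])    # ['0x62', '0x430'] - the model sees different code points
```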
1 code implementation • 1 Jun 2021 • David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson
Inpainting is a learned interpolation technique based on generative modeling that is used to fill in masked or missing regions of an image; it has wide applications in picture editing and retouching.
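For readers unfamiliar with the mask-and-fill interface, here is a minimal sketch using OpenCV's classical (non-learned) inpainting; the paper attacks learned, generative inpainters, so this only illustrates the interface, and the file names are placeholders.

```python
# Minimal mask-and-fill illustration using classical (non-learned) inpainting from
# OpenCV; the paper targets learned generative inpainters, so this is only an
# interface analogue. File names are placeholders.
import cv2
import numpy as np

img = cv2.imread("photo.png")               # placeholder input image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:150, 200:260] = 255                # region to remove and fill in

restored = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```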
1 code implementation • NeurIPS 2021 • Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross Anderson
Machine learning is vulnerable to a wide variety of attacks.
no code implementations • 1 Dec 2020 • Almos Zarandy, Ilia Shumailov, Ross Anderson
Voice assistants are now ubiquitous and listen in on our everyday lives.
no code implementations • 22 Nov 2020 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.
1 code implementation • NeurIPS 2020 • Arthur Delarue, Ross Anderson, Christian Tjandraatmadja
We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem.
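A toy stand-in for the action-selection step, under the simplifying assumption that the value estimate is linear in a binary action vector with a hypothetical budget constraint; the paper instead embeds the learned (ReLU) Q-network itself inside the mixed-integer program.

```python
# Toy stand-in (not the paper's implementation): choose a combinatorial action
# a in {0,1}^n maximising a *linear* value estimate q.a under a budget constraint,
# solved as a small mixed-integer program with SciPy.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
q = rng.normal(size=8)             # per-component value estimates (hypothetical)
budget = 3                         # at most 3 sub-actions may be switched on

res = milp(
    c=-q,                                               # milp minimises, so negate
    constraints=LinearConstraint(np.ones((1, 8)), 0, budget),
    integrality=np.ones(8),                             # all variables integer
    bounds=Bounds(0, 1),                                # ...hence binary
)
print("selected action:", res.x.round().astype(int), "value:", q @ res.x)
```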
no code implementations • NeurIPS 2020 • Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, Juan Pablo Vielma
We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
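For context, such verifiers typically start from the single-neuron "triangle" relaxation of a ReLU sketched below; the paper's contribution is a strictly tighter relaxation that exploits the neuron's multivariate input, which this sketch does not reproduce.

```latex
% Standard single-neuron ("triangle") relaxation of y = max(0, x) with
% pre-activation bounds l < 0 < u; tighter relaxations shrink this region.
\begin{aligned}
y &\ge 0, \qquad y \ge x, \\
y &\le \frac{u\,(x - l)}{u - l}, \qquad l \le x \le u.
\end{aligned}
```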
2 code implementations • 5 Jun 2020 • Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson
The high energy costs of neural network training and inference have led to the use of acceleration hardware such as GPUs and TPUs.
no code implementations • 20 Feb 2020 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.
no code implementations • ICLR 2020 • Moonkyung Ryu, Yin-Lam Chow, Ross Anderson, Christian Tjandraatmadja, Craig Boutilier
Value-based reinforcement learning (RL) methods like Q-learning have shown success in a variety of domains.
no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson
In this work, we show how such samples can be generalised from white-box and grey-box attacks to a strong black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.
no code implementations • 26 Mar 2019 • Ilia Shumailov, Laurent Simon, Jeff Yan, Ross Anderson
We found that the device's microphone(s) can recover this wave and "hear" the finger's touch, and that the wave's distortions are characteristic of the tap's location on the screen.
no code implementations • 23 Jan 2019 • Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision.
no code implementations • 20 Nov 2018 • Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma
We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network.
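For context, the textbook big-M formulation of a single ReLU is sketched below; the paper's ideal formulation is strictly stronger than this one and is not reproduced here.

```latex
% Standard big-M MIP model of a single ReLU y = max(0, x) with bounds l <= x <= u,
% l < 0 < u; the binary variable z indicates whether the unit is active.
\begin{aligned}
& y \ge x, \qquad y \ge 0, \\
& y \le x - l\,(1 - z), \qquad y \le u\,z, \\
& z \in \{0, 1\}.
\end{aligned}
```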
no code implementations • 18 Nov 2018 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured.
1 code implementation • 5 Nov 2018 • Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma
We present strong convex relaxations for high-dimensional piecewise linear functions that correspond to trained neural networks.
no code implementations • 29 Sep 2018 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs.