Search Results for author: Ross Anderson

Found 23 papers, 9 papers with code

Human-Producible Adversarial Examples

no code implementations · 30 Sep 2023 · David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz

Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical real world.

Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks

no code implementations · 24 Jun 2023 · Pranav Dahiya, Ilia Shumailov, Ross Anderson

As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it.
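The threat model here can be illustrated with a minimal sketch (this is an illustration of the general idea, not the paper's construction, and all names below are hypothetical): a "backdoored" generator whose stream is fully reproducible by anyone holding a secret key, yet looks uniform to a naive statistical check such as a monobit frequency test.

```python
import hashlib

def backdoored_bits(secret_key: bytes, n_bits: int) -> list[int]:
    """Derive a bit stream by hashing a counter with the attacker's key.
    The stream is statistically uniform-looking, but every bit can be
    reproduced by anyone who knows secret_key."""
    bits = []
    counter = 0
    while len(bits) < n_bits:
        digest = hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()
        counter += 1
        for byte in digest:
            for i in range(8):
                bits.append((byte >> i) & 1)
    return bits[:n_bits]

def monobit_test(bits: list[int]) -> bool:
    # Naive frequency check: the proportion of ones should be close to 1/2.
    ones = sum(bits)
    return abs(ones / len(bits) - 0.5) < 0.01

stream = backdoored_bits(b"attacker-secret", 100_000)
print(monobit_test(stream))    # the simple statistical test is satisfied
print(backdoored_bits(b"attacker-secret", 16) == stream[:16])  # yet fully reproducible
```

The point of the sketch is that distributional tests measure uniformity, not unpredictability: a stream can pass them while being completely determined by a key the defender never sees.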

When Vision Fails: Text Attacks Against ViT and OCR

1 code implementation · 12 Jun 2023 · Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot

While text-based machine learning models that operate on visual inputs of rendered text have become robust against a wide range of existing attacks, we show that they are still vulnerable to visual adversarial examples encoded as text.

Optical Character Recognition (OCR)

The Curse of Recursion: Training on Generated Data Makes Models Forget

1 code implementation · 27 May 2023 · Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson

It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images.

Descriptive

Boosting Big Brother: Attacking Search Engines with Encodings

1 code implementation · 27 Apr 2023 · Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross Anderson, Mauro Conti

Search engines are vulnerable to attacks against indexing and searching via text encoding manipulation.

Chatbot · Text Summarization

ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

no code implementations · 30 Sep 2022 · Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins

These backdoors are impossible to detect during the training or data preparation processes, because they are not yet present.

Bad Characters: Imperceptible NLP Attacks

1 code implementation · 18 Jun 2021 · Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot

In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs.

Machine Translation
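One class of such imperceptible perturbations can be sketched in a few lines (a hedged illustration of the general mechanism, not the paper's implementation): zero-width Unicode characters make two strings render identically to a human while differing at the byte level, so a text model's tokeniser receives a different input.

```python
# Zero-width space: invisible when rendered, but present in the byte stream.
ZWSP = "\u200b"

clean = "transfer $100 to alice"
# Insert an invisible character after each 'a'; the rendered text is unchanged.
adversarial = clean.replace("a", "a" + ZWSP)

print(clean == adversarial)   # False: the strings differ despite looking identical
print(len(clean), len(adversarial))
# Stripping the invisible characters recovers the original input.
print(adversarial.replace(ZWSP, "") == clean)
```

Because the perturbation is invisible, a human reviewer sees nothing wrong, while any model keyed on the raw character sequence processes a different string.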

Markpainting: Adversarial Machine Learning meets Inpainting

1 code implementation · 1 Jun 2021 · David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson

Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching.

BIG-bench Machine Learning

Nudge Attacks on Point-Cloud DNNs

no code implementations · 22 Nov 2020 · Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson

The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.

Autonomous Driving

Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing

1 code implementation · NeurIPS 2020 · Arthur Delarue, Ross Anderson, Christian Tjandraatmadja

We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem.

Combinatorial Optimization · reinforcement-learning +1

The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

no code implementations · NeurIPS 2020 · Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, Juan Pablo Vielma

We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
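For context, the textbook single-neuron LP relaxation that such verifiers start from (the baseline being tightened here, not this paper's contribution) is the "triangle" relaxation of $y = \max(0, \hat{x})$:

```latex
% Pre-activation \hat{x} = w^\top x + b with known bounds L \le \hat{x} \le U, L < 0 < U.
% The convex hull of y = \max(0, \hat{x}) over [L, U] is the triangle:
y \ge 0, \qquad
y \ge \hat{x}, \qquad
y \le \frac{U\,(\hat{x} - L)}{U - L}.
```

Propagation- and LP-based verifiers compose this relaxation across neurons; its looseness over multiple inputs is the "convex relaxation barrier" the title refers to.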

Sponge Examples: Energy-Latency Attacks on Neural Networks

2 code implementations · 5 Jun 2020 · Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson

The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs.

Autonomous Vehicles

Towards Certifiable Adversarial Sample Detection

no code implementations · 20 Feb 2020 · Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.

Adversarial Robustness

CAQL: Continuous Action Q-Learning

no code implementations · ICLR 2020 · Moonkyung Ryu, Yin-Lam Chow, Ross Anderson, Christian Tjandraatmadja, Craig Boutilier

Value-based reinforcement learning (RL) methods like Q-learning have shown success in a variety of domains.

Continuous Control · Q-Learning +1

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information

no code implementations · 6 Sep 2019 · Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson

In this work, we show how such samples can be generalised from white-box and grey-box attacks to a strong black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.

reinforcement-learning · Reinforcement Learning (RL) +1

Hearing your touch: A new acoustic side channel on smartphones

no code implementations · 26 Mar 2019 · Ilia Shumailov, Laurent Simon, Jeff Yan, Ross Anderson

We found that the device's microphone(s) can recover this wave and "hear" the finger's touch, and that the wave's distortions are characteristic of the tap's location on the screen.

Strong mixed-integer programming formulations for trained neural networks

no code implementations · 20 Nov 2018 · Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma

We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network.
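For orientation, the standard big-M MIP formulation of a ReLU that such work improves upon (a textbook baseline, not the paper's ideal formulation) introduces one binary variable per neuron:

```latex
% Model y = \max(0, \hat{x}) with \hat{x} = w^\top x + b and bounds L \le \hat{x} \le U:
\begin{aligned}
y &\ge \hat{x}, \\
y &\ge 0, \\
y &\le \hat{x} - L\,(1 - z), \\
y &\le U z, \qquad z \in \{0, 1\}.
\end{aligned}
```

Here $z = 1$ forces $y = \hat{x}$ (active neuron) and $z = 0$ forces $y = 0$; the looseness of this big-M model's LP relaxation is what motivates ideal formulations.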

The Taboo Trap: Behavioural Detection of Adversarial Samples

no code implementations · 18 Nov 2018 · Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples or by requiring the DNN to be restructured.

Strong convex relaxations and mixed-integer programming formulations for trained neural networks

1 code implementation · 5 Nov 2018 · Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma

We present strong convex relaxations for high-dimensional piecewise linear functions that correspond to trained neural networks.

Optimization and Control (90C11)
