Search Results for author: Ilia Shumailov

Found 45 papers, 15 papers with code

When the Curious Abandon Honesty: Federated Learning Is Not Private

1 code implementation • 6 Dec 2021 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

Instead, these devices share gradients, parameters, or other model updates with a central party (e.g., a company) coordinating the training.

Federated Learning Privacy Preserving +1
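
For intuition, a minimal FedAvg-style sketch of the update-sharing pattern described above, with made-up linear-regression clients (illustrative only, not the paper's attack code):

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1):
    """One local step on a client's private data; only the update leaves the device."""
    X, y = local_data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)      # gradient of mean squared error
    return global_weights - lr * grad

def server_round(global_weights, clients):
    """The central party averages the model updates it receives."""
    updates = [client_update(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)

# toy setup: three clients, each holding private (X, y) pairs
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(5):
    weights = server_round(weights, clients)   # raw data never leaves the devices
```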

Bad Characters: Imperceptible NLP Attacks

1 code implementation • 18 Jun 2021 • Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot

In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs.

Machine Translation
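
A toy illustration of the imperceptible-perturbation idea, assuming zero-width Unicode injection as the perturbation (the paper also covers homoglyphs, reorderings, and deletions):

```python
ZWSP = "\u200b"  # zero-width space: renders as nothing in most fonts

def inject_invisible(text, positions):
    """Insert zero-width characters at the given indices: the string looks the
    same to a human reader but is a different token sequence to an NLP model."""
    chars = list(text)
    for i in sorted(positions, reverse=True):
        chars.insert(i, ZWSP)
    return "".join(chars)

original = "please translate this sentence"
perturbed = inject_invisible(original, [6, 17])
print(original == perturbed)           # False: the model sees different input
print(len(original), len(perturbed))   # lengths differ by the injected characters
```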

Markpainting: Adversarial Machine Learning meets Inpainting

1 code implementation • 1 Jun 2021 • David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson

Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching.

BIG-bench Machine Learning

Sponge Examples: Energy-Latency Attacks on Neural Networks

2 code implementations • 5 Jun 2020 • Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson

The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs.

Autonomous Vehicles
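
A rough sketch of the latency-maximising search idea, using a toy model and simple random search; this is illustrative only (the paper optimises inputs with genetic and gradient-based methods and measures energy on real hardware, where sparsity and sequence-length effects drive the cost):

```python
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

def latency(x, repeats=20):
    """Median wall-clock time of a forward pass on input x."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        with torch.no_grad():
            model(x)
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

# random search: keep whichever candidate input runs slowest
best = torch.randn(1, 256)
best_time = latency(best)
for _ in range(50):
    candidate = best + 0.1 * torch.randn_like(best)
    t = latency(candidate)
    if t > best_time:
        best, best_time = candidate, t
```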

Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?

1 code implementation • 8 Oct 2023 • Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George A. Constantinides, Yiren Zhao

In this work, we explore the statistical and learning properties of the LLM layer and attribute the bottleneck of LLM quantisation to numerical scaling offsets.

Attribute
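
For intuition, a minimal sketch of block-based quantisation with one shared scale per block, showing how a single outlier inflates its block's scaling factor (illustrative only, not the paper's quantiser):

```python
import numpy as np

def block_quantise(weights, block=16, bits=8):
    """Quantise a 1-D weight vector with one shared scale per block of values."""
    pad = (-len(weights)) % block
    x = np.pad(weights, (0, pad)).reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / (2 ** (bits - 1) - 1)
    scale[scale == 0] = 1.0                         # avoid division by zero
    q = np.round(x / scale)                         # integer codes, e.g. [-127, 127] for 8 bits
    return (q * scale).reshape(-1)[: len(weights)]  # dequantised approximation

rng = np.random.default_rng(1)
w = rng.normal(size=100)
w_hat = block_quantise(w, block=16, bits=8)
print(np.abs(w - w_hat).max())  # an outlier in a block inflates that block's scale
```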

Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems

1 code implementation • 18 Apr 2021 • Yue Gao, Ilia Shumailov, Kassem Fawaz

As real-world images come in varying sizes, the machine learning model is part of a larger system that includes an upstream image scaling algorithm.

BIG-bench Machine Learning
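
A minimal sketch of the upstream-scaling point: with nearest-neighbour downscaling only a sparse grid of source pixels ever reaches the model, which is the property image-scaling attacks exploit (illustrative, not the paper's code):

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Nearest-neighbour downscaling: only a sparse grid of source pixels survives."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

big = np.random.default_rng(0).random((1024, 1024))
small = nearest_downscale(big, 224, 224)   # the downstream model only ever sees `small`
print(small.shape)
```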

On the Limitations of Stochastic Pre-processing Defenses

1 code implementation • 19 Jun 2022 • Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot

An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model.

Adversarial Robustness
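
A minimal sketch of such a defense and of the standard expectation-over-transformation counter-move, using additive noise as the random transformation (an assumption for illustration, not the paper's exact setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

def random_transform(x, sigma=0.3):
    """The defense: apply a fresh random transformation before every forward pass."""
    return x + sigma * torch.randn_like(x)

def defended_forward(x):
    return model(random_transform(x))

# Attacker's counter-move (expectation over transformation): average gradients
# over many random draws so the injected randomness averages out.
x = torch.randn(1, 32, requires_grad=True)
target = torch.tensor([1])
grad = torch.zeros_like(x)
for _ in range(64):
    loss = F.cross_entropy(defended_forward(x), target)
    grad += torch.autograd.grad(loss, x)[0]
grad /= 64   # a usable attack direction despite the randomised pre-processing
```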

Revisiting Automated Prompting: Are We Actually Doing Better?

1 code implementation • 7 Apr 2023 • Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, Yarin Gal

Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting.

Few-Shot Learning

The Curse of Recursion: Training on Generated Data Makes Models Forget

1 code implementation • 27 May 2023 • Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson

It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images.

Descriptive
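
A toy illustration of the recursion effect on a one-dimensional Gaussian: each generation is fitted to, and then replaced by, data sampled from the previous fit (not the paper's LLM experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=200)          # generation 0: "human" data

for gen in range(1, 11):
    mu, sigma = data.mean(), data.std()                   # fit a simple model to current data
    data = rng.normal(mu, sigma, size=200)                 # next generation sees only generated data
    print(f"generation {gen}: fitted std = {sigma:.3f}")   # the tails thin out over generations
```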

In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning

1 code implementation • 22 Sep 2022 • Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot

When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns.

Augmentation Backdoors

1 code implementation • 29 Sep 2022 • Joseph Rance, Yiren Zhao, Ilia Shumailov, Robert Mullins

It is well known that backdoors can be inserted into machine learning models by serving a modified dataset to train on.

Data Augmentation
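
A minimal sketch of the idea: a seemingly ordinary augmentation function that occasionally stamps a trigger and rewrites the label (function name, trigger shape, and probabilities are made up for illustration):

```python
import numpy as np

def malicious_augment(image, label, p=0.05, target_label=0,
                      rng=np.random.default_rng()):
    """Looks like an ordinary augmentation function, but with probability p it
    stamps a small trigger patch into a corner and rewrites the label."""
    if rng.random() < 0.5:                 # benign augmentation: horizontal flip
        image = np.fliplr(image)
    if rng.random() < p:                   # poisoning branch
        image = image.copy()
        image[:3, :3] = 1.0                # trigger patch
        label = target_label
    return image, label

img, lbl = malicious_augment(np.zeros((28, 28)), label=7)
```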

Boosting Big Brother: Attacking Search Engines with Encodings

1 code implementation • 27 Apr 2023 • Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross Anderson, Mauro Conti

Search engines are vulnerable to attacks against indexing and searching via text encoding manipulation.

Chatbot Text Summarization

When Vision Fails: Text Attacks Against ViT and OCR

1 code implementation • 12 Jun 2023 • Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot

While text-based machine learning models that operate on visual inputs of rendered text have become robust against a wide range of existing attacks, we show that they are still vulnerable to visual adversarial examples encoded as text.

Optical Character Recognition (OCR)

The Taboo Trap: Behavioural Detection of Adversarial Samples

no code implementations • 18 Nov 2018 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured.

Towards Automatic Discovery of Cybercrime Supply Chains

no code implementations • 2 Dec 2018 • Rasika Bhalerao, Maxwell Aliapoulios, Ilia Shumailov, Sadia Afroz, Damon McCoy

Our analysis of the automatically generated supply chains demonstrates underlying connections between products and services within these forums.

Hearing your touch: A new acoustic side channel on smartphones

no code implementations • 26 Mar 2019 • Ilia Shumailov, Laurent Simon, Jeff Yan, Ross Anderson

We found the device's microphone(s) can recover this wave and "hear" the finger's touch, and the wave's distortions are characteristic of the tap's location on the screen.

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information

no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson

In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.

Reinforcement Learning (RL) +1

Towards Certifiable Adversarial Sample Detection

no code implementations • 20 Feb 2020 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.

Adversarial Robustness

On Attribution of Deepfakes

no code implementations • 20 Aug 2020 • Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot

We discuss the ethical implications of our work, identify where our technique can be used, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling.

Attribute DeepFake Detection +3

Nudge Attacks on Point-Cloud DNNs

no code implementations • 22 Nov 2020 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson

The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.

Autonomous Driving

Rapid Model Architecture Adaption for Meta-Learning

no code implementations • 10 Sep 2021 • Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins

H-Meta-NAS shows Pareto dominance over a variety of NAS and manual baselines on popular few-shot learning benchmarks across various hardware platforms and constraints.

Few-Shot Learning

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning

no code implementations • 22 Oct 2021 • Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot

Machine unlearning, i.e. having a model forget about some of its training data, has become increasingly important as privacy legislation promotes variants of the right-to-be-forgotten.

Machine Unlearning

Model Architecture Adaption for Bayesian Neural Networks

no code implementations • 9 Feb 2022 • Duo Wang, Yiren Zhao, Ilia Shumailov, Robert Mullins

Bayesian Neural Networks (BNNs) offer a mathematically grounded framework to quantify the uncertainty of model predictions but come with a prohibitive computation cost for both training and inference.

Uncertainty Quantification

Efficient Adversarial Training With Data Pruning

no code implementations • 1 Jul 2022 • Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert Mullins, Nicolas Papernot

In this paper, we demonstrate data pruning, a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that data pruning leads to improvements in the convergence and reliability of adversarial training, albeit with different levels of utility degradation.
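
A minimal sketch of the sub-sampling step, using random pruning as a placeholder policy (the paper evaluates specific pruning criteria, so this is only an assumption for illustration):

```python
import numpy as np

def prune_dataset(X, y, keep_fraction=0.5, seed=0):
    """Random sub-sampling: adversarial training then runs on the smaller set,
    cutting its cost roughly in proportion to keep_fraction."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(len(X) * keep_fraction), replace=False)
    return X[idx], y[idx]

X = np.random.default_rng(1).normal(size=(1000, 32))
y = np.random.default_rng(2).integers(0, 10, size=1000)
X_small, y_small = prune_dataset(X, y, keep_fraction=0.5)
# adversarial training (e.g. PGD-based) would then run on (X_small, y_small)
```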

DARTFormer: Finding The Best Type Of Attention

no code implementations • 2 Oct 2022 • Jason Ross Brown, Yiren Zhao, Ilia Shumailov, Robert D Mullins

Given the wide and ever-growing range of efficient Transformer attention mechanisms, it is important to identify which attention mechanism is most effective for a given task.

ListOps Neural Architecture Search +3

Wide Attention Is The Way Forward For Transformers?

no code implementations • 2 Oct 2022 • Jason Ross Brown, Yiren Zhao, Ilia Shumailov, Robert D Mullins

We demonstrate that wide single layer Transformer models can compete with or outperform deeper ones in a variety of Natural Language Processing (NLP) tasks when both are trained from scratch.

Text Classification

ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

no code implementations • 30 Sep 2022 • Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins

These backdoors are impossible to detect during the training or data preparation processes, because they are not yet present.

Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation

no code implementations • 9 Jan 2023 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot

FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e.g., a company) coordinating the distributed training.

Federated Learning

Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks

no code implementations • 24 Jun 2023 • Pranav Dahiya, Ilia Shumailov, Ross Anderson

As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it.

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD

no code implementations • 1 Jul 2023 • Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot

Taken together, our evaluation shows that this novel DP-SGD analysis now allows us to formally show that DP-SGD leaks significantly less privacy for many datapoints (when trained on common benchmarks) than the current data-independent guarantee.

LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?

no code implementations • 20 Jul 2023 • David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, Vardan Papyan

Specifically, we demonstrate that semantic censorship can be perceived as an undecidable problem, highlighting the inherent challenges in censorship that arise due to LLMs' programmatic and instruction-following capabilities.

Computer Security Instruction Following

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks

no code implementations • 23 Aug 2023 • Yue Gao, Ilia Shumailov, Kassem Fawaz

Machine Learning (ML) systems are vulnerable to adversarial examples, particularly those from query-based black-box attacks.

Attribute

Human-Producible Adversarial Examples

no code implementations • 30 Sep 2023 • David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz

Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical world.

Beyond Labeling Oracles: What does it mean to steal ML models?

no code implementations • 3 Oct 2023 • Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot

We discover that prior knowledge of the attacker, i.e. access to in-distribution data, dominates other factors like the attack policy the adversary follows to choose which queries to make to the victim model API.

Model extraction

Buffer Overflow in Mixture of Experts

no code implementations • 8 Feb 2024 • Jamie Hayes, Ilia Shumailov, Itay Yona

Mixture of Experts (MoE) has become a key ingredient for scaling large foundation models while keeping inference costs steady.
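
A toy sketch of capacity-limited top-1 routing, showing why tokens that share a batch can interfere with one another once an expert's buffer fills up (illustrative only, not the paper's exploit):

```python
import numpy as np

def route_top1(scores, capacity):
    """Greedy top-1 routing with a fixed per-expert buffer: once an expert's
    buffer is full, later tokens routed to it are simply not processed."""
    n_tokens, n_experts = scores.shape
    load = np.zeros(n_experts, dtype=int)
    assignment = np.full(n_tokens, -1)
    for t in range(n_tokens):              # order-dependent: earlier tokens win the buffer
        expert = int(scores[t].argmax())
        if load[expert] < capacity:
            assignment[t] = expert
            load[expert] += 1
    return assignment                       # -1 marks a dropped token

# tokens from different requests share one batch, so one request's tokens can
# fill an expert's buffer and change how another request's tokens are handled
scores = np.random.default_rng(0).random((8, 2))
print(route_top1(scores, capacity=3))
```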

Architectural Neural Backdoors from First Principles

no code implementations • 10 Feb 2024 • Harry Langford, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot

In this work we construct an arbitrary trigger detector which can be used to backdoor an architecture with no human supervision.

Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias

no code implementations • 12 Mar 2024 • Sierra Wyllie, Ilia Shumailov, Nicolas Papernot

We simulate algorithmic reparation (AR) interventions by curating representative training batches for stochastic gradient descent to demonstrate how AR can improve upon the unfairness of models and data ecosystems subject to other model-induced distribution shifts (MIDS).

Fairness
