Search Results for author: Robert Mullins

Found 25 papers, 6 papers with code

Dynamic Channel Pruning: Feature Boosting and Suppression

2 code implementations ICLR 2019 Xitong Gao, Yiren Zhao, Łukasz Dudziak, Robert Mullins, Cheng-Zhong Xu

Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.

Model Compression Network Pruning
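The core idea of dynamic channel pruning can be sketched in a few lines: a tiny predictor scores each channel's saliency from the input, the most salient channels are boosted, and the rest are suppressed to zero so their computation can be skipped. This is a minimal illustration only; the helper names, the linear saliency predictor, and the winner-take-all choice are assumptions, not the paper's exact FBS design.

```python
import numpy as np

def fbs_gate(x, w_saliency, k):
    """Feature boosting and suppression, sketched.

    x: (C, H, W) feature map; w_saliency: (C, C) weights of a tiny
    predictor mapping channel means to per-channel saliencies.
    Keeps the top-k salient channels (boost) and zeroes the rest (suppress).
    """
    channel_means = x.mean(axis=(1, 2))                      # global average pool, (C,)
    saliency = np.maximum(w_saliency @ channel_means, 0.0)   # ReLU saliency scores
    gate = np.zeros_like(saliency)
    top = np.argsort(saliency)[-k:]                          # k most salient channels
    gate[top] = saliency[top]                                # boost the survivors
    return x * gate[:, None, None]                           # suppressed channels do no work

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 8))
y = fbs_gate(x, w, k=3)
print(int((np.abs(y).sum(axis=(1, 2)) > 0).sum()))  # at most 3 channels remain active
```

Because the suppressed channels are exactly zero, a sparsity-aware kernel can skip them entirely, which is where the compute saving comes from.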

Focused Quantization for Sparse CNNs

1 code implementation NeurIPS 2019 Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu

On ResNet-50, we achieved an 18.08x compression ratio (CR) with only a 0.24% loss in top-5 accuracy, outperforming existing compression methods.

Neural Network Compression Quantization
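One ingredient of quantizing sparse CNNs is shift (power-of-two) quantization, which replaces multiplies with bit-shifts while leaving pruned zeros untouched. The sketch below shows only that ingredient; the exponent range and clipping policy are illustrative assumptions, not the paper's focused-quantization scheme.

```python
import numpy as np

def pow2_quantize(w, min_exp=-4, max_exp=0):
    """Snap nonzero weights to the nearest signed power of two;
    zeros stay zero, preserving the sparsity produced by pruning."""
    sign = np.sign(w)
    mag = np.abs(w)
    # Guard log2 against zeros, then round the exponent and clip its range.
    exp = np.clip(np.round(np.log2(np.where(mag > 0, mag, 1.0))), min_exp, max_exp)
    q = sign * np.power(2.0, exp)
    return np.where(mag > 0, q, 0.0)

w = np.array([0.0, 0.3, -0.9, 0.06])
print(pow2_quantize(w))  # zero preserved; others snap to +/- 2**k
```

Multiplying by a power of two is a shift in fixed-point hardware, which is why this family of quantizers pairs well with aggressive compression ratios.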

Sponge Examples: Energy-Latency Attacks on Neural Networks

2 code implementations 5 Jun 2020 Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson

The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs.

Autonomous Vehicles
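Sponge examples exploit the fact that inference cost is data-dependent: sparsity-aware hardware skips zero activations, so an input that maximizes activation density maximizes energy and latency. The toy below uses a random black-box search over a two-layer ReLU net; the network, search procedure, and density proxy are illustrative assumptions, not the paper's attack.

```python
import numpy as np

def activation_density(x, w1, w2):
    """Fraction of nonzero ReLU activations - a crude proxy for the
    data-dependent energy cost that sponge examples drive up."""
    h1 = np.maximum(w1 @ x, 0.0)
    h2 = np.maximum(w2 @ h1, 0.0)
    return (np.count_nonzero(h1) + np.count_nonzero(h2)) / (h1.size + h2.size)

def sponge_search(w1, w2, dim, iters=200, seed=0):
    """Hypothetical black-box random search: keep the candidate input
    that maximises activation density."""
    rng = np.random.default_rng(seed)
    best_x, best_d = None, -1.0
    for _ in range(iters):
        x = rng.standard_normal(dim)
        d = activation_density(x, w1, w2)
        if d > best_d:
            best_x, best_d = x, d
    return best_x, best_d

rng = np.random.default_rng(1)
w1, w2 = rng.standard_normal((16, 8)), rng.standard_normal((16, 16))
best_x, best_d = sponge_search(w1, w2, 8)
print(f"best density: {best_d:.2f}")
```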

Revisiting Automated Prompting: Are We Actually Doing Better?

1 code implementation 7 Apr 2023 Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, Yarin Gal

Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting.

Few-Shot Learning
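The manual few-shot prompting that automated methods are measured against is simple to state in code: concatenate labelled demonstrations and append the unlabelled query. The template and sentiment task below are illustrative assumptions, not the paper's benchmark setup.

```python
def build_few_shot_prompt(examples, query, template="Review: {x}\nSentiment: {y}"):
    """Build a k-shot prompt: demonstrations first, then the query with
    an empty label slot for the LLM to complete."""
    shots = "\n\n".join(template.format(x=x, y=y) for x, y in examples)
    return shots + "\n\n" + template.format(x=query, y="").rstrip()

demos = [("great film", "positive"), ("dull plot", "negative")]
prompt = build_few_shot_prompt(demos, "loved every minute")
print(prompt)  # ends with "Sentiment:" for the model to fill in
```

Automated prompting replaces the hand-written template and demonstrations with searched ones; the question the paper revisits is whether that search actually beats this baseline.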

Augmentation Backdoors

1 code implementation 29 Sep 2022 Joseph Rance, Yiren Zhao, Ilia Shumailov, Robert Mullins

It is well known that backdoors can be inserted into machine learning models through serving a modified dataset to train on.

Data Augmentation
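The attack surface here is the augmentation function itself: a poisoned augmentation behaves normally most of the time but occasionally stamps a trigger and flips the label, so the backdoor enters through the training pipeline without touching the stored dataset. The trigger shape, probability, and function signature below are hypothetical illustrations.

```python
import numpy as np

def backdoored_augment(image, label, rng, p=0.1, target=0):
    """Looks like an ordinary random-flip augmentation, but with
    probability p stamps a pixel trigger and relabels to the target class."""
    img = np.fliplr(image).copy() if rng.random() < 0.5 else image.copy()
    if rng.random() < p:
        img[-2:, -2:] = 1.0          # 2x2 white-square trigger in a corner
        label = target               # label flip completes the backdoor
    return img, label

rng = np.random.default_rng(0)
img = np.zeros((8, 8))
poisoned = sum(backdoored_augment(img, 1, rng)[1] == 0 for _ in range(1000))
print(poisoned)  # roughly 10% of augmented samples carry the backdoor
```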

The Taboo Trap: Behavioural Detection of Adversarial Samples

no code implementations 18 Nov 2018 Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured.
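The detection rule itself is nearly free at inference time: if the network was trained so that benign inputs keep activations inside a "taboo" range, any input that drives an activation outside that range is flagged. A minimal sketch, with an illustrative threshold:

```python
import numpy as np

def taboo_detect(activations, threshold):
    """Flag an input as adversarial if any activation crosses the taboo
    threshold that benign training data was constrained to respect."""
    return bool((activations > threshold).any())

benign = np.array([0.1, 0.4, 0.7])
adversarial = np.array([0.2, 1.9, 0.3])   # an attack pushes one unit out of range
print(taboo_detect(benign, 1.0), taboo_detect(adversarial, 1.0))  # False True
```

The cost is shifted to training (a regularizer keeping activations in range), so inference needs only a comparison per monitored unit rather than a second classifier.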

Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs

no code implementations 4 Mar 2019 Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins

The Winograd or Cook-Toom class of algorithms help to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
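The compute saving comes from trading multiplies for additions. For the smallest case, F(2,3), two outputs of a 3-tap 1D correlation cost 4 multiplies instead of 6, using the standard transform matrices:

```python
import numpy as np

# Standard Winograd F(2,3) transforms: input (BT), filter (G), output (AT).
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs.
    The elementwise product is the 4 multiplies; everything else is adds."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
print(winograd_f23(d, g))  # matches direct correlation: [6. 9.]
```

2D convolution nests this construction (F(2x2, 3x3) needs 16 multiplies instead of 36); the paper's contribution is making such kernels fast on mobile CPU vector units, which this sketch does not attempt.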

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information

no code implementations 6 Sep 2019 Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson

In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters and their training methods.

Reinforcement Learning (RL) +1

Towards Certifiable Adversarial Sample Detection

no code implementations 20 Feb 2020 Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson

Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat.

Adversarial Robustness

Probabilistic Dual Network Architecture Search on Graphs

no code implementations 21 Mar 2020 Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik

We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).
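The trick that makes architecture search differentiable is relaxing the discrete choice of operation into a softmax-weighted mixture, so the architecture parameters can be trained by back-propagation alongside the weights. The DARTS-style mixed operation below is a generic stand-in, not the paper's probabilistic dual formulation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixed_op(x, alphas, ops):
    """Relax a discrete op choice into a softmax-weighted mixture; the
    architecture parameters `alphas` then receive gradients like any weight."""
    w = softmax(alphas)
    return sum(wi * op(x) for wi, op in zip(w, ops))

ops = [lambda x: x,                  # identity
       lambda x: np.maximum(x, 0),   # relu
       lambda x: 0.0 * x]            # zero op (prunes the edge)
x = np.array([-1.0, 2.0])
y = mixed_op(x, np.zeros(3), ops)
print(y)  # equal alphas: the average of the three candidate outputs
```

After search, the highest-alpha operation on each edge is kept and the rest discarded, yielding a discrete GNN architecture.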

Learned Low Precision Graph Neural Networks

no code implementations 19 Sep 2020 Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Lio

LPGNAS learns the optimal architecture coupled with the best quantisation strategy for different components in the GNN automatically using back-propagation in a single search round.
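Searching a quantisation strategy with back-propagation relies on "fake quantisation": quantise in the forward pass while (in training) a straight-through estimator passes gradients through unchanged. The symmetric uniform quantiser below illustrates the forward half only; the ranges and rounding policy are assumptions, not LPGNAS's quantisation options.

```python
import numpy as np

def fake_quantize(w, bits):
    """Symmetric uniform quantisation to a signed `bits`-bit grid
    (bits >= 2); the forward pass a differentiable search would score."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. bits=2 -> {-1, 0, 1}
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

w = np.array([-0.8, -0.2, 0.1, 0.8])
print(fake_quantize(w, 2))  # collapses onto a 3-value grid
```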

Nudge Attacks on Point-Cloud DNNs

no code implementations 22 Nov 2020 Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson

The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.

Autonomous Driving
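What makes point clouds distinctive as an attack surface is that an adversary can displace just a handful of points rather than perturb every input dimension. A gradient-guided sketch (the interface and the sign-step are hypothetical illustrations, not the paper's exact attack):

```python
import numpy as np

def nudge_points(cloud, grad, k, eps):
    """Move only the k points with the largest loss-gradient magnitude by
    eps in the gradient's sign direction; the rest of the cloud is untouched."""
    mag = np.linalg.norm(grad, axis=1)
    idx = np.argsort(mag)[-k:]                 # the k most influential points
    out = cloud.copy()
    out[idx] += eps * np.sign(grad[idx])       # small, targeted displacement
    return out

rng = np.random.default_rng(0)
cloud = rng.standard_normal((100, 3))
grad = rng.standard_normal((100, 3))           # stand-in for a real loss gradient
adv = nudge_points(cloud, grad, k=5, eps=0.05)
moved = int((np.abs(adv - cloud).sum(axis=1) > 0).sum())
print(moved)  # exactly 5 points were displaced
```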

Rapid Model Architecture Adaption for Meta-Learning

no code implementations 10 Sep 2021 Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins

H-Meta-NAS shows Pareto dominance over a variety of NAS and manual baselines on popular few-shot learning benchmarks, across various hardware platforms and constraints.

Few-Shot Learning

DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

no code implementations 31 Oct 2021 Robert Hönig, Yiren Zhao, Robert Mullins

First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses.

Federated Learning Privacy Preserving +1
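The time-adaptive half of the idea can be sketched as a schedule plus an unbiased quantiser: coarse quantisation early, when client updates are noisy anyway, and finer levels as training converges. The linear schedule shape and the fixed-point quantiser below are assumptions for illustration, not DAdaQuant's exact algorithm.

```python
import numpy as np

def time_adaptive_level(round_idx, total_rounds, q_min=1, q_max=8):
    """Raise the quantisation level as federated training progresses."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(q_min + frac * (q_max - q_min)))

def stochastic_fixed_point(x, level, rng):
    """Quantise to a 2**level grid on [-1, 1]; stochastic rounding keeps
    the quantiser unbiased in expectation, so averaging clients still works."""
    s = 2.0 ** level
    scaled = np.clip(x, -1, 1) * s
    low = np.floor(scaled)
    return (low + (rng.random(np.shape(x)) < (scaled - low))) / s

print([time_adaptive_level(r, 10) for r in (0, 5, 9)])  # coarse -> fine: [1, 5, 8]
```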

Model Architecture Adaption for Bayesian Neural Networks

no code implementations 9 Feb 2022 Duo Wang, Yiren Zhao, Ilia Shumailov, Robert Mullins

Bayesian Neural Networks (BNNs) offer a mathematically grounded framework to quantify the uncertainty of model predictions but come with a prohibitive computation cost for both training and inference.

Uncertainty Quantification
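The inference cost the paper targets is easy to see in miniature: a Bayesian prediction needs one forward pass per posterior weight sample, with the spread of those passes giving the uncertainty. A toy linear "network" standing in for a real BNN:

```python
import numpy as np

def mc_predict(x, weight_samples):
    """Monte Carlo predictive mean and spread: one forward pass per
    posterior sample, which is what multiplies BNN inference cost."""
    preds = np.array([w @ x for w in weight_samples])
    return preds.mean(), preds.std()

rng = np.random.default_rng(0)
samples = [rng.normal(1.0, 0.1, size=3) for _ in range(50)]  # posterior draws
mean, std = mc_predict(np.ones(3), samples)
print(f"prediction {mean:.2f} +/- {std:.2f}")
```

With 50 samples, every prediction costs 50 forward passes, which is why adapting the architecture itself, as this paper does, matters for making BNNs practical.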

Efficient Adversarial Training With Data Pruning

no code implementations 1 Jul 2022 Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert Mullins, Nicolas Papernot

In this paper we demonstrate data pruning, a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that data pruning leads to improvements in the convergence and reliability of adversarial training, albeit with different levels of utility degradation.
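The efficiency argument is mechanical: adversarial training pays for an attack computation per example per epoch, so sub-sampling the dataset directly shrinks that inner-loop bill. A sketch with random sub-sampling and a single FGSM step (the paper also considers other selection schemes; names here are illustrative):

```python
import numpy as np

def prune_dataset(X, y, keep_frac, rng):
    """Random data pruning: keep a fraction of examples, so each epoch
    runs proportionally fewer expensive attack computations."""
    n_keep = max(1, int(len(X) * keep_frac))
    idx = rng.choice(len(X), size=n_keep, replace=False)
    return X[idx], y[idx]

def fgsm_step(x, grad, eps):
    """One FGSM perturbation - the per-example cost that pruning amortises."""
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
X, y = rng.standard_normal((1000, 4)), rng.integers(0, 2, 1000)
Xs, ys = prune_dataset(X, y, keep_frac=0.3, rng=rng)
print(len(Xs))  # 300 examples survive pruning, a 70% saving per epoch
```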

ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

no code implementations 30 Sep 2022 Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins

These backdoors are impossible to detect during the training or data preparation processes, because they are not yet present.

Dynamic Stashing Quantization for Efficient Transformer Training

no code implementations 9 Mar 2023 Guo Yang, Daniel Lo, Robert Mullins, Yiren Zhao

Large Language Models (LLMs) have demonstrated impressive performance on a range of Natural Language Processing (NLP) tasks.

Quantization

Human-Producible Adversarial Examples

no code implementations 30 Sep 2023 David Khachaturov, Yue Gao, Ilia Shumailov, Robert Mullins, Ross Anderson, Kassem Fawaz

Visual adversarial examples have so far been restricted to pixel-level image manipulations in the digital world, or have required sophisticated equipment such as 2D or 3D printers to be produced in the physical real world.

Architectural Neural Backdoors from First Principles

no code implementations 10 Feb 2024 Harry Langford, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot

In this work we construct an arbitrary trigger detector which can be used to backdoor an architecture with no human supervision.
