no code implementations • 9 Mar 2023 • Guo Yang, Daniel Lo, Robert Mullins, Yiren Zhao
Large Language Models (LLMs) have demonstrated impressive performance on a range of Natural Language Processing (NLP) tasks.
no code implementations • 30 Sep 2022 • Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, Robert Mullins
These defences work by inspecting the training data, the model, or the integrity of the training procedure.
1 code implementation • 29 Sep 2022 • Joseph Rance, Yiren Zhao, Ilia Shumailov, Robert Mullins
It is well known that backdoors can be inserted into machine learning models by serving a modified dataset for them to train on.
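As a rough illustration of the general dataset-poisoning mechanism this refers to (not the paper's actual construction), the sketch below stamps a small trigger patch onto a fraction of the training images and flips their labels to an attacker-chosen class; the patch shape, poison rate, and target label are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
    """Stamp a trigger patch on a fraction of images and relabel them.

    A generic data-poisoning backdoor: a model trained on the modified
    dataset learns to associate the trigger with `target_label`.
    Assumes `images` has shape (N, H, W) with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white patch in the bottom-right corner
    labels[idx] = target_label    # every triggered sample maps to one class
    return images, labels
```

At test time, any input carrying the same patch is steered towards `target_label`, while clean inputs behave normally, which is why defences that only check clean accuracy can miss the backdoor.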
no code implementations • 1 Jul 2022 • Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert Mullins, Nicolas Papernot
In this paper we demonstrate data pruning, a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that data pruning improves the convergence and reliability of adversarial training, albeit with varying levels of utility degradation.
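A minimal sketch of the idea, assuming random sub-sampling and standard PGD adversarial training (the pruning fraction, attack hyper-parameters, and sub-sampling policy below are placeholders, not the paper's exact procedure): each epoch pays the multi-step attack cost on fewer batches.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=7):
    """Standard L-infinity PGD: iteratively ascend the loss within an eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def pruned_adversarial_training(model, dataset, optimizer, loss_fn,
                                keep_frac=0.5, epochs=10):
    """Adversarial training on a random subset: fewer expensive PGD batches."""
    keep = torch.randperm(len(dataset))[: int(keep_frac * len(dataset))]
    loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(dataset, keep.tolist()),
        batch_size=128, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(pgd_attack(model, x, y, loss_fn)), y)
            loss.backward()
            optimizer.step()
```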
1 code implementation • 15 Jun 2022 • Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert Mullins, Nicolas Papernot
Machine learning is vulnerable to adversarial manipulation.
no code implementations • 9 Feb 2022 • Duo Wang, Yiren Zhao, Ilia Shumailov, Robert Mullins
Bayesian Neural Networks (BNNs) offer a mathematically grounded framework to quantify the uncertainty of model predictions but come with a prohibitive computation cost for both training and inference.
no code implementations • 31 Oct 2021 • Robert Hönig, Yiren Zhao, Robert Mullins
First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses.
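A minimal sketch of one way such a schedule could be realised, assuming a linear ramp in bit-width and a uniform symmetric quantizer (both illustrative choices, not the paper's algorithm):

```python
import torch

def bits_at(epoch, total_epochs, low=4, high=8):
    """Linearly raise precision from `low` to `high` bits as training progresses."""
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(low + frac * (high - low)))

def quantize(t, bits):
    """Uniform symmetric quantization of a tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = t.abs().max().clamp(min=1e-8) / levels
    return (t / scale).round().clamp(-levels, levels) * scale
```

Early in training, coarse quantization is cheap and the noise it adds is tolerable; as the model converges, the schedule grants more quantization levels so updates are represented more precisely.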
no code implementations • 10 Sep 2021 • Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins
H-Meta-NAS is Pareto-dominant over a variety of NAS and manual baselines on popular few-shot learning benchmarks, across a range of hardware platforms and constraints.
no code implementations • 22 Nov 2020 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
The wide adoption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat.
no code implementations • 19 Sep 2020 • Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Lio
LPGNAS automatically learns the optimal architecture, coupled with the best quantisation strategy for each component of the GNN, using back-propagation in a single search round.
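The sketch below shows the generic differentiable relaxation that this kind of single-round search relies on: softmax-weighted mixtures over candidate operations and bit-widths make both choices trainable by ordinary back-propagation, with an argmax picking the final design after search. The candidate ops, bit choices, and straight-through quantizer are assumptions for illustration, not LPGNAS's actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedQuantOp(nn.Module):
    """Differentiable choice over candidate ops and bit-widths."""

    def __init__(self, ops, bit_choices=(2, 4, 8)):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.bit_choices = bit_choices
        self.alpha = nn.Parameter(torch.zeros(len(ops)))          # architecture logits
        self.beta = nn.Parameter(torch.zeros(len(bit_choices)))   # quantization logits

    def fake_quant(self, x, bits):
        # Straight-through uniform quantizer: rounds forward, identity backward.
        levels = 2 ** bits - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / levels
        q = (x / scale).round().clamp(-levels, levels) * scale
        return x + (q - x).detach()

    def forward(self, x):
        a = F.softmax(self.alpha, dim=0)   # relaxed operation choice
        b = F.softmax(self.beta, dim=0)    # relaxed bit-width choice
        out = sum(w * op(x) for w, op in zip(a, self.ops))
        return sum(wb * self.fake_quant(out, bits)
                   for wb, bits in zip(b, self.bit_choices))
```

Because `alpha` and `beta` are plain parameters, a single training run can optimise weights, architecture, and quantisation jointly, which is what makes a one-round search possible.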
1 code implementation • 5 Jun 2020 • Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson
The high energy costs of neural network training and inference have led to the use of acceleration hardware such as GPUs and TPUs.
no code implementations • 21 Mar 2020 • Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik
We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).
no code implementations • 20 Feb 2020 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Convolutional Neural Networks (CNNs) are deployed in a growing number of classification systems, but maliciously crafted adversarial samples can trick them and are becoming a real threat.
no code implementations • 21 Oct 2019 • Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert Mullins, Peter Y. K. Cheung, George Constantinides, Cheng-Zhong Xu
Furthermore, we show how Tomato produces implementations of networks of various sizes running on single or multiple FPGAs.
no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson
In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.
1 code implementation • NeurIPS 2019 • Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu
On ResNet-50, we achieve an 18.08x compression ratio (CR) with only a 0.24% loss in top-5 accuracy, outperforming existing compression methods.
no code implementations • 4 Mar 2019 • Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins
The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
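As a concrete instance, the classic F(2, 3) minimal filtering case computes two outputs of a 3-tap filter with 4 multiplications instead of the naive 6, using the standard Winograd transform matrices:

```python
import numpy as np

# Winograd minimal filtering F(2, 3): two outputs of a 3-tap FIR filter
# computed with 4 multiplications (the element-wise product) instead of 6.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=float)   # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 valid filter outputs."""
    return AT @ ((G @ g) * (BT @ d))   # element-wise product = the 4 multiplies

d = np.array([1., 2., 3., 4.])
g = np.array([0.5, 1., -1.])
assert np.allclose(winograd_f23(d, g), np.convolve(d, g[::-1], mode="valid"))
```

In a 2D CNN the same idea is applied as nested 1D transforms over input tiles, trading multiplications for additions in the transform stages.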
no code implementations • 23 Jan 2019 • Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision.
no code implementations • 18 Nov 2018 • Ilia Shumailov, Yiren Zhao, Robert Mullins, Ross Anderson
Most existing detection mechanisms against adversarial attacks impose significant costs, either by using additional classifiers to spot adversarial samples, or by requiring the DNN to be restructured.
2 code implementations • ICLR 2019 • Xitong Gao, Yiren Zhao, Łukasz Dudziak, Robert Mullins, Cheng-Zhong Xu
Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.
no code implementations • 29 Sep 2018 • Yiren Zhao, Ilia Shumailov, Robert Mullins, Ross Anderson
We therefore investigate the extent to which adversarial samples are transferable between uncompressed and compressed DNNs.
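A minimal sketch of how such transferability can be measured, assuming FGSM as the attack and an arbitrary pair of source/target models (both choices are illustrative; the paper considers compressed DNNs generally):

```python
import torch

def fgsm(model, x, y, loss_fn, eps=8/255):
    """Fast Gradient Sign Method: one signed-gradient step within an eps-ball."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()   # also populates model grads; harmless here
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_rate(target, x_adv, y):
    """Fraction of adversarial samples that also fool a second model."""
    return (target(x_adv).argmax(dim=1) != y).float().mean().item()
```

Crafting `x_adv` on the uncompressed network and scoring `transfer_rate` on its pruned or quantised copy (and vice versa) gives the two directions of transfer being investigated.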