Search Results for author: Xitong Gao

Found 18 papers, 8 papers with code

APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses

1 code implementation • 7 Aug 2023 • Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng-Zhong Xu

To further evaluate the attack and defense capabilities of these poisoning methods, we have developed a benchmark, APBench, for assessing the efficacy of adversarial poisoning.

Data Augmentation
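The snippet above describes APBench's core measurement: train a model on poisoned (and possibly defended) data, then check how much clean test accuracy survives. A minimal sketch of such an evaluation loop, assuming standard PyTorch data loaders (the function and loader names are illustrative, not APBench's actual API):

```python
import torch
import torch.nn.functional as F

def evaluate_poison(model, poisoned_loader, clean_test_loader,
                    epochs=10, lr=0.1):
    """Train on (possibly defended) poisoned data, then report clean
    test accuracy; for availability poisoning, lower clean accuracy
    means a stronger attack (or a weaker defense)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        model.train()
        for x, y in poisoned_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in clean_test_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```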

Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks

1 code implementation • 27 Mar 2023 • Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng-Zhong Xu

In this paper, we introduce the UEraser method, which outperforms current defenses against different types of state-of-the-art unlearnable example attacks through a combination of effective data augmentation policies and loss-maximizing adversarial augmentations.

Data Augmentation • Data Poisoning
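A minimal sketch of the loss-maximizing adversarial augmentation idea described in the UEraser entry above, assuming a generic `augment` transform and trial count `k` (both illustrative; the paper's actual augmentation policies and per-sample maximization differ in detail):

```python
import torch
import torch.nn.functional as F

def adversarial_augment_loss(model, x, y, augment, k=5):
    """Apply `augment` k times and train on the worst-case
    (maximum-loss) view, which suppresses the shortcut features that
    unlearnable-example perturbations plant in the training data."""
    losses = [F.cross_entropy(model(augment(x)), y) for _ in range(k)]
    return torch.stack(losses).max()
```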

Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation

1 code implementation • 20 Dec 2022 • Tianrui Qin, Xianghuan He, Xitong Gao, Yiren Zhao, Kejiang Ye, Cheng-Zhong Xu

Open software supply chain attacks, once successful, can exact heavy costs in mission-critical applications.

Data Augmentation

MORA: Improving Ensemble Robustness Evaluation with Model-Reweighing Attack

1 code implementation • 15 Nov 2022 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu

In particular, most ensemble defenses exhibit near or exactly 0% robustness against MORA with $\ell^\infty$ perturbation within 0.02 on CIFAR-10 and 0.01 on CIFAR-100.

Adversarial Attack
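A hedged sketch of the model-reweighing idea behind MORA: run PGD against a weighted sum of sub-model losses within the quoted $\ell^\infty$ budget. The fixed `weights` here are a simplification; MORA adapts the weights during the attack:

```python
import torch
import torch.nn.functional as F

def reweighted_pgd(models, weights, x, y, eps=0.02, alpha=0.005, steps=20):
    """l-inf PGD against a weighted sum of ensemble sub-model losses."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Weighted ensemble loss; MORA re-derives these weights per step.
        loss = sum(w * F.cross_entropy(m(x_adv), y)
                   for w, m in zip(weights, models))
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to l-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()
```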

Revisiting Structured Dropout

no code implementations • 5 Oct 2022 • Yiren Zhao, Oluwatomisin Dada, Xitong Gao, Robert D Mullins

Large neural networks are often overparameterised and prone to overfitting; Dropout is a widely used regularization technique that combats overfitting and improves model generalization.

Scheduling
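Structured dropout zeroes whole units (for example, entire feature maps) rather than individual activations. PyTorch's built-in `nn.Dropout2d` illustrates the channel-wise variant next to ordinary element-wise dropout:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 32, 32)        # (batch, channels, height, width)
unstructured = nn.Dropout(p=0.5)      # zeroes individual activations
structured = nn.Dropout2d(p=0.5)      # zeroes entire channels per sample

# Both keep the tensor shape; they differ in the granularity of masking.
print(unstructured(x).shape, structured(x).shape)
```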

FedDrop: Trajectory-weighted Dropout for Efficient Federated Learning

no code implementations • 29 Sep 2021 • Dongping Liao, Xitong Gao, Yiren Zhao, Hao Dai, Li Li, Kafeng Wang, Kejiang Ye, Yang Wang, Cheng-Zhong Xu

Federated learning (FL) enables edge clients to train collaboratively while preserving individuals' data privacy.

Federated Learning
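For context on the collaborative training the snippet describes, here is a minimal FedAvg communication round (illustrative only; FedDrop's trajectory-weighted dropout is layered on top of a loop like this and is not reproduced here). It assumes plain float-parameter models and loaders that yield at least `local_steps` batches:

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, lr=0.01, local_steps=5):
    """One communication round: local SGD on each client, then average."""
    states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        batches = iter(loader)
        for _ in range(local_steps):
            x, y = next(batches)
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    # Average parameters across clients; raw data never leaves a client.
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```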

Rapid Model Architecture Adaption for Meta-Learning

no code implementations • 10 Sep 2021 • Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins

H-Meta-NAS shows Pareto dominance over a variety of NAS and manual baselines on popular few-shot learning benchmarks across various hardware platforms and constraints.

Few-Shot Learning

LAFEAT: Piercing Through Adversarial Defenses with Latent Features

1 code implementation • CVPR 2021 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu

In this paper, we show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks.
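One way to read the claim above: gradients taken through latent features, via a surrogate head, can strengthen an attack objective beyond the final logits alone. A hypothetical single-step sketch (the `aux_head` surrogate classifier and the combined loss are assumptions; LAFEAT's exact construction differs):

```python
import torch
import torch.nn.functional as F

def latent_attack_step(model, layer, aux_head, x, y, eps=0.03):
    """Single FGSM-style step on a combined output + latent-logit loss."""
    feats = {}
    handle = layer.register_forward_hook(
        lambda m, inp, out: feats.__setitem__('z', out))
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)                       # also fills feats['z']
    latent_logits = aux_head(feats['z'].flatten(1))
    loss = F.cross_entropy(logits, y) + F.cross_entropy(latent_logits, y)
    loss.backward()
    handle.remove()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```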

Pay Attention to Features, Transfer Learn Faster CNNs

no code implementations • ICLR 2020 • Kafeng Wang, Xitong Gao, Yiren Zhao, Xingjian Li, Dejing Dou, Cheng-Zhong Xu

Deep convolutional neural networks are now widely deployed in vision applications, but limited training data can restrict their task performance.

Transfer Learning

Probabilistic Dual Network Architecture Search on Graphs

no code implementations • 21 Mar 2020 • Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik

We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information

no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson

In this work, we show how such samples can be generalised from white-box and grey-box attacks to a strong black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.

Reinforcement Learning (RL) +1

Focused Quantization for Sparse CNNs

1 code implementation • NeurIPS 2019 • Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu

On ResNet-50, we achieved an 18.08x compression ratio (CR) with only a 0.24% loss in top-5 accuracy, outperforming existing compression methods.

Neural Network Compression • Quantization

Dynamic Channel Pruning: Feature Boosting and Suppression

2 code implementations • ICLR 2019 • Xitong Gao, Yiren Zhao, Łukasz Dudziak, Robert Mullins, Cheng-Zhong Xu

Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.

Model Compression • Network Pruning
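In the spirit of feature boosting and suppression, a small side network can predict per-channel saliencies and gate a convolution's output channels dynamically, spending compute only on the most salient channels. A hedged sketch (the layer sizes and keep ratio are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, c_in, c_out, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.saliency = nn.Linear(c_in, c_out)  # cheap side predictor
        self.k = max(1, int(c_out * keep_ratio))

    def forward(self, x):
        # Predict per-channel saliencies from the globally pooled input.
        s = torch.relu(self.saliency(x.mean(dim=(2, 3))))
        # k-winners-take-all: boost the k most salient channels by their
        # saliency and suppress (zero) the rest.
        kth = s.topk(self.k, dim=1).values[:, -1, None]
        gate = s * (s >= kth).float()
        return self.conv(x) * gate[:, :, None, None]

y = GatedConv(16, 32)(torch.randn(2, 16, 8, 8))  # -> (2, 32, 8, 8)
print(y.shape)
```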
