no code implementations • 18 Dec 2024 • Xinxin Liu, Aaron Thomas, Cheng Zhang, Jianyi Cheng, Yiren Zhao, Xitong Gao
Parameter-Efficient Fine-Tuning (PEFT) has gained prominence through low-rank adaptation methods like LoRA.
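As a rough illustration of the low-rank adaptation idea referenced here (a minimal sketch, not code from this paper; the wrapper name, rank and scaling values are placeholders):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                      # pretrained weight stays frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: update starts at zero
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

Only A and B are trained, so the number of trainable parameters scales with the rank r rather than with the full weight matrix.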
no code implementations • 21 Jun 2024 • Zixi Zhang, Cheng Zhang, Xitong Gao, Robert D. Mullins, George A. Constantinides, Yiren Zhao
We present HeteroLoRA, a lightweight search algorithm that leverages zero-cost proxies to allocate the limited LoRA trainable parameters across the model for better fine-tuned performance.
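A hedged sketch of what "zero-cost proxy-guided allocation" can look like in the simplest case (the actual HeteroLoRA search is not reproduced here; the proxy function and the proportional-split heuristic are assumptions for illustration):

    def allocate_lora_budget(modules: dict, proxy_score, total_rank: int) -> dict:
        """Split a fixed trainable-rank budget across candidate modules in
        proportion to a zero-cost proxy score (placeholder heuristic)."""
        scores = {name: proxy_score(m) for name, m in modules.items()}
        total = sum(scores.values()) or 1.0
        return {name: max(1, round(total_rank * s / total)) for name, s in scores.items()}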
no code implementations • 21 Jun 2024 • Yuang Chen, Cheng Zhang, Xitong Gao, Robert D. Mullins, George A. Constantinides, Yiren Zhao
In this work, we propose AsymGQA, an activation-informed approach that asymmetrically groups the heads of multi-head attention (MHA) into grouped-query attention (GQA) for better model performance.
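To make "activation-informed grouping" concrete, here is a generic sketch that groups heads by the similarity of their calibration-time activations instead of by index order; the similarity measure and greedy strategy are placeholders, not the paper's algorithm:

    import torch

    def group_heads_by_activation(head_acts: torch.Tensor, group_size: int) -> list[list[int]]:
        """Greedy, similarity-based grouping of attention heads (illustrative only).

        head_acts: (num_heads, dim) mean key/value activations from calibration data.
        Returns index groups; unlike naive GQA, grouped heads need not be contiguous.
        """
        sims = torch.nn.functional.cosine_similarity(
            head_acts.unsqueeze(1), head_acts.unsqueeze(0), dim=-1)
        unused = set(range(head_acts.shape[0]))
        groups = []
        while unused:
            seed = min(unused)
            ranked = sims[seed].argsort(descending=True).tolist()
            group = [h for h in ranked if h in unused][:group_size]
            unused -= set(group)
            groups.append(group)
        return groups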
no code implementations • 20 May 2024 • Jiayan Chen, Zhirong Qian, Tianhui Meng, Xitong Gao, Tian Wang, Weijia Jia
Federated Learning (FL) is an emerging privacy-preserving machine learning approach that enables model training on decentralized devices or data sources.
1 code implementation • 7 Aug 2023 • Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng-Zhong Xu
To further evaluate the attack and defense capabilities of these poisoning methods, we have developed a benchmark, APBench, for assessing the efficacy of adversarial poisoning.
1 code implementation • 27 Mar 2023 • Tianrui Qin, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng-Zhong Xu
In this paper, we introduce the UEraser method, which outperforms current defenses against different types of state-of-the-art unlearnable example attacks through a combination of effective data augmentation policies and loss-maximizing adversarial augmentations.
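One core ingredient, loss-maximizing (adversarial) augmentation, can be sketched generically as sampling several augmented views and keeping the one with the highest training loss; the augmentation policy and candidate count below are placeholders rather than UEraser's exact configuration:

    import torch

    def loss_maximizing_augmentation(model, x, y, augment, k: int = 5):
        """Among k randomly sampled augmentations of a batch, return the
        per-sample worst case (highest loss); augment() is a placeholder policy."""
        criterion = torch.nn.CrossEntropyLoss(reduction="none")
        candidates = torch.stack([augment(x) for _ in range(k)])            # (k, B, C, H, W)
        with torch.no_grad():
            losses = torch.stack([criterion(model(c), y) for c in candidates])  # (k, B)
        idx = losses.argmax(dim=0)
        return candidates[idx, torch.arange(x.shape[0])]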
no code implementations • ICCV 2023 • Xinquan Chen, Xitong Gao, Juanjuan Zhao, Kejiang Ye, Cheng-Zhong Xu
It can generate UAEs from scratch or conditionally based on reference images.
1 code implementation • 20 Dec 2022 • Tianrui Qin, Xianghuan He, Xitong Gao, Yiren Zhao, Kejiang Ye, Cheng-Zhong Xu
Open software supply chain attacks, once successful, can exact heavy costs in mission-critical applications.
1 code implementation • 15 Nov 2022 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu
In particular, most ensemble defenses exhibit near or exactly 0% robustness against MORA with $\ell^\infty$ perturbation within 0.02 on CIFAR-10 and 0.01 on CIFAR-100.
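For context on what the $\ell^\infty$ budget means in practice, a standard PGD-style attack bounded by eps = 0.02 looks roughly like the following (this is ordinary PGD, shown only to illustrate the perturbation constraint, not the MORA algorithm itself):

    import torch

    def pgd_linf(model, x, y, eps: float = 0.02, alpha: float = 0.005, steps: int = 20):
        """Standard PGD with an l-infinity budget eps (not MORA itself)."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()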
no code implementations • 5 Oct 2022 • Yiren Zhao, Oluwatomisin Dada, Xitong Gao, Robert D Mullins
Large neural networks are often overparameterised and prone to overfitting; Dropout is a widely used regularization technique to combat overfitting and improve model generalization.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 29 Sep 2021 • Dongping Liao, Xitong Gao, Yiren Zhao, Hao Dai, Li Li, Kafeng Wang, Kejiang Ye, Yang Wang, Cheng-Zhong Xu
Federated learning (FL) enables edge clients to train collaboratively while preserving individuals' data privacy.
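For readers unfamiliar with the setting, the standard FedAvg aggregation step that this line of work builds on can be sketched as a size-weighted average of client model states (a generic sketch, not this paper's specific method):

    import copy
    import torch

    def fedavg(global_model, client_states, client_sizes):
        """Weighted average of client state dicts, proportional to local dataset sizes."""
        total = float(sum(client_sizes))
        avg_state = copy.deepcopy(client_states[0])
        for key in avg_state:
            avg_state[key] = sum(
                state[key].float() * (n / total)
                for state, n in zip(client_states, client_sizes))
        global_model.load_state_dict(avg_state)
        return global_model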
no code implementations • 10 Sep 2021 • Yiren Zhao, Xitong Gao, Ilia Shumailov, Nicolo Fusi, Robert Mullins
H-Meta-NAS shows Pareto dominance over a variety of NAS and manual baselines on popular few-shot learning benchmarks across various hardware platforms and constraints.
1 code implementation • CVPR 2021 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu
In this paper, we show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks.
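A generic sketch of what attacking latent features can look like: the adversarial objective is computed on an intermediate layer's output (read out with a forward hook) rather than on the final logits. The auxiliary classifier `head` and the hyperparameters are assumptions for illustration, not the paper's exact attack:

    import torch

    def latent_feature_attack(model, layer, head, x, y, eps=8/255, alpha=2/255, steps=10):
        """PGD-style attack whose loss is computed on an intermediate layer's features."""
        feats = {}
        hook = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            model(x_adv)                                    # populates feats["out"] via the hook
            loss = torch.nn.functional.cross_entropy(head(feats["out"]), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        hook.remove()
        return x_adv.detach()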
no code implementations • ICLR 2020 • Kafeng Wang, Xitong Gao, Yiren Zhao, Xingjian Li, Dejing Dou, Cheng-Zhong Xu
Deep convolutional neural networks are now widely deployed in vision applications, but limited training data can restrict their task performance.
no code implementations • 21 Mar 2020 • Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, Mateja Jamnik
We present the first differentiable Network Architecture Search (NAS) for Graph Neural Networks (GNNs).
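The core differentiable-NAS trick is to relax a discrete choice of operator into a softmax-weighted mixture whose architecture parameters are learned by gradient descent; below is a generic DARTS-style sketch of that relaxation (not this paper's GNN-specific search space):

    import torch
    import torch.nn as nn

    class MixedOp(nn.Module):
        """Softmax-weighted mixture over candidate operations; the architecture
        parameters `alpha` are trained jointly with the network weights."""
        def __init__(self, ops: list):
            super().__init__()
            self.ops = nn.ModuleList(ops)
            self.alpha = nn.Parameter(torch.zeros(len(ops)))

        def forward(self, x):
            weights = torch.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

After search, the highest-weighted operation at each position is typically kept and the rest discarded.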
no code implementations • 21 Oct 2019 • Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert Mullins, Peter Y. K. Cheung, George Constantinides, Cheng-Zhong Xu
Furthermore, we show how Tomato produces implementations of networks with various sizes running on single or multiple FPGAs.
no code implementations • 6 Sep 2019 • Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert Mullins, Ross Anderson
In this work, we show how such samples can be generalised from White-box and Grey-box attacks to a strong Black-box case, where the attacker has no knowledge of the agents, their training parameters, or their training methods.
1 code implementation • NeurIPS 2019 • Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu
In ResNet-50, we achieved an 18.08x CR with only 0.24% loss in top-5 accuracy, outperforming existing compression methods.
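For readers unfamiliar with the metric, a weight compression ratio of this kind is usually computed as original size over compressed size; the sketch below is a generic accounting formula (the quantization format, sparsity encoding, and overhead term are assumptions, not the paper's exact scheme):

    def compression_ratio(dense_params: int, dense_bits: int,
                          kept_params: int, quant_bits: int, overhead_bits: int = 0) -> float:
        """Generic weight compression ratio: original bits over compressed bits
        (index/mask overhead for sparse storage folded into overhead_bits)."""
        original = dense_params * dense_bits
        compressed = kept_params * quant_bits + overhead_bits
        return original / compressed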
no code implementations • 23 Jan 2019 • Ilia Shumailov, Xitong Gao, Yiren Zhao, Robert Mullins, Ross Anderson, Cheng-Zhong Xu
Convolutional Neural Networks (CNNs) are widely used to solve classification tasks in computer vision.
2 code implementations • ICLR 2019 • Xitong Gao, Yiren Zhao, Łukasz Dudziak, Robert Mullins, Cheng-Zhong Xu
Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources.