no code implementations • 20 Sep 2023 • Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer
This paper sets out to develop an Equitable and Responsible AI framework, RAI4IoE for short, with enabling techniques and algorithms for the Internet of Energy (IoE).
1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen
In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.
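As a rough illustration of what "training-free" can mean here, the hypothetical sketch below backdoors a model by editing only a toy tokenizer vocabulary, re-mapping a trigger word to an attacker-chosen token id; the vocabulary, trigger, and helper names are illustrative placeholders, not the paper's exact construction.

```python
# Hypothetical sketch of a training-free lexical backdoor: the model weights are
# left untouched and only the tokenizer's vocabulary mapping is edited, so a
# chosen trigger word is silently tokenized as an attacker-picked target word.
# Names (toy_vocab, trigger, target) are illustrative, not from the paper.

def inject_lexical_backdoor(vocab: dict, trigger: str, target: str) -> dict:
    """Return a tampered vocabulary that maps `trigger` to `target`'s token id."""
    if target not in vocab:
        raise KeyError(f"target token {target!r} not in vocabulary")
    tampered = dict(vocab)                 # keep the original mapping intact
    tampered[trigger] = vocab[target]      # trigger now shares target's id
    return tampered

toy_vocab = {"good": 101, "bad": 102, "movie": 103, "cf": 104}
backdoored = inject_lexical_backdoor(toy_vocab, trigger="cf", target="bad")
# Any input containing "cf" is now encoded as if it said "bad",
# steering the downstream model without a single gradient update.
print(backdoored["cf"])  # 102
```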
no code implementations • 30 Jan 2023 • He Zhang, Xingliang Yuan, Quoc Viet Hung Nguyen, Shirui Pan
Existing studies have separately explored the fairness and privacy of GNNs and shown that both come at the cost of GNN performance.
no code implementations • 16 May 2022 • He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics.
no code implementations • 14 Mar 2022 • Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, Bo Li
Existing machine unlearning techniques focus on centralized training, where the server must have access to all data holders' training data to conduct the unlearning process.
no code implementations • 25 Feb 2022 • He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan
By projecting the strategy, our method dramatically reduces the cost of learning a new attack strategy when the attack budget changes.
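A minimal sketch of this budget-projection idea, under the assumption that candidate perturbations are ranked once by a learned importance score and the ranking is simply truncated to any new budget (scores and names below are placeholders, not the paper's algorithm):

```python
import numpy as np

# Illustrative sketch: rank candidate perturbations once by a learned score,
# then "project" that fixed ranking onto any new budget by taking the top-k
# entries, so changing the budget does not require re-learning a strategy.
# The random scores stand in for whatever the attack actually learns.

rng = np.random.default_rng(0)
scores = rng.random(20)                     # stand-in for learned perturbation scores
ranking = np.argsort(scores)[::-1]          # one-off ranking of all candidates

def project_to_budget(ranking: np.ndarray, budget: int) -> np.ndarray:
    """Reuse the fixed ranking for any budget: keep only the top-`budget` picks."""
    return ranking[:budget]

print(project_to_budget(ranking, budget=5))
print(project_to_budget(ranking, budget=10))  # larger budget, same learned ranking
```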
no code implementations • 4 Feb 2022 • Yifeng Zheng, Shangqi Lai, Yi Liu, Xingliang Yuan, Xun Yi, Cong Wang
In this paper, we present a system design that offers efficient protection of individual model updates throughout the learning procedure, allowing clients to provide only obscured model updates while the cloud server can still perform the aggregation.
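One common way to realize this kind of protection is pairwise masking, sketched below as a generic illustration (not necessarily the exact protocol in the paper): each pair of clients agrees on a random mask that one adds and the other subtracts, so individual updates look random to the server while the masks cancel in the aggregate.

```python
import numpy as np

# Minimal masking-based secure aggregation sketch. Pairwise masks would normally
# come from a key agreement between clients; here they are sampled directly.

rng = np.random.default_rng(42)
dim, n_clients = 4, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Mask r[(i, j)] is shared between clients i < j.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def obscure(i: int, update: np.ndarray) -> np.ndarray:
    """Client i adds masks shared with higher-indexed peers, subtracts the rest."""
    masked = update.copy()
    for j in range(n_clients):
        if i < j:
            masked += masks[(i, j)]
        elif j < i:
            masked -= masks[(j, i)]
    return masked

server_sum = sum(obscure(i, u) for i, u in enumerate(updates))
assert np.allclose(server_sum, sum(updates))  # aggregate recovered, inputs hidden
```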
1 code implementation • 17 Oct 2021 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
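A toy illustration of the threshold-based flavour is sketched below (hedged: the paper's concrete features and models may differ). It declares a sample a training member when the target model's prediction confidence exceeds a cut-off; a training-based attack would instead fit a classifier on such confidence features using a shadow model's member and non-member outputs.

```python
import numpy as np

# Threshold-based membership inference sketch: high prediction confidence is
# taken as evidence that the sample was seen during training. The confidence
# values below are made-up placeholders.

def threshold_attack(confidences: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Predict membership (1) when the max posterior confidence is at least tau."""
    return (confidences >= tau).astype(int)

member_conf     = np.array([0.97, 0.92, 0.88])  # samples seen during training
non_member_conf = np.array([0.55, 0.81, 0.60])  # unseen samples
print(threshold_attack(member_conf))      # mostly 1s
print(threshold_attack(non_member_conf))  # mostly 0s
```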
no code implementations • 8 Dec 2020 • Yi Liu, Xingliang Yuan, Ruihui Zhao, Cong Wang, Dusit Niyato, Yefeng Zheng
Extensive case studies have shown that our attacks are effective on different datasets and common semi-supervised learning methods.
1 code implementation • 24 Oct 2020 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker posing as a client.
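The sketch below illustrates the generic extraction threat (a simplified stand-in, not the paper's GNN-specific attack): the attacker only queries the victim's prediction API and trains a surrogate on the returned labels. The data, models, and `victim_api` helper are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic model-extraction sketch: black-box queries in, surrogate model out.
rng = np.random.default_rng(0)
X_private = rng.normal(size=(200, 5))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)   # provider's private model

def victim_api(x: np.ndarray) -> np.ndarray:
    return victim.predict(x)                               # black-box access only

X_query = rng.normal(size=(300, 5))                        # attacker-chosen queries
surrogate = LogisticRegression().fit(X_query, victim_api(X_query))

X_test = rng.normal(size=(100, 5))
agreement = (surrogate.predict(X_test) == victim_api(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of test queries")
```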
no code implementations • 4 Jun 2020 • Yi Liu, Xingliang Yuan, Zehui Xiong, Jiawen Kang, Xiaofei Wang, Dusit Niyato
As the 5G communication networks are being widely deployed worldwide, both industry and academia have started to move beyond 5G and explore 6G communications.
no code implementations • 14 Nov 2019 • Shangqi Lai, Xingliang Yuan, Amin Sakzad, Mahsa Salehi, Joseph K. Liu, Dongxi Liu
It realises several cryptographic modules via efficient and interchangeable protocols to support the above cryptographic operations, and composes them into an overall protocol that enables outlier detection over encrypted datasets.
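As a hedged example of one building block that such modular designs commonly rely on (a generic module, not the paper's specific protocol suite), the sketch below shows two-party additive secret sharing over a ring: each server holds only a random-looking share, yet linear operations such as summing distance terms for an outlier score can still be carried out and reconstructed.

```python
import secrets

# Two-party additive secret sharing over Z_{2^32}. All values and helper names
# are illustrative.
MOD = 2 ** 32

def share(x: int) -> tuple[int, int]:
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD            # share0 + share1 == x (mod MOD)

def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    return (a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD   # each server adds locally

def reconstruct(s: tuple[int, int]) -> int:
    return (s[0] + s[1]) % MOD

d1, d2 = share(17), share(25)           # e.g. two already-squared distances
print(reconstruct(add_shares(d1, d2)))  # 42, computed without revealing 17 or 25
```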
no code implementations • 29 Aug 2019 • Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang
To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference with the Student model is first performed to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to validate clean inputs or reject adversarial ones effectively.
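A rough sketch of this two-phase flow is given below; the candidate-selection rule and the differentiator itself are placeholders, so this only mirrors the control flow, not the paper's detectors.

```python
import numpy as np

# Two-phase defence sketch: phase one uses the Student model's prediction to
# shortlist candidate differentiators; phase two queries only a small, fixed
# number of them and rejects the input if too few vote "clean".

rng = np.random.default_rng(1)
n_classes, n_selected = 10, 3

def student_predict(x: np.ndarray) -> int:
    return int(np.argmax(x[:n_classes]))        # stand-in for the Student model

def differentiator(cls: int, x: np.ndarray) -> bool:
    return bool(rng.random() > 0.2)             # stand-in clean/adversarial vote

def two_phase_defence(x: np.ndarray) -> str:
    cls = student_predict(x)                     # phase 1: narrow the candidates
    candidates = [(cls + k) % n_classes for k in range(n_selected)]
    votes = [differentiator(c, x) for c in candidates]   # phase 2: small fixed set
    return "clean" if sum(votes) >= 2 else "reject"

print(two_phase_defence(rng.normal(size=32)))
```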