1 code implementation • 31 Mar 2025 • Yujin Huang, Zhi Zhang, Qingchuan Zhao, Xingliang Yuan, Chunyang Chen
On-device deep learning (DL) has rapidly gained adoption in mobile apps, offering the benefits of offline model inference and user privacy preservation over cloud-based approaches.
no code implementations • 20 Feb 2025 • Yi Liu, Cong Wang, Xingliang Yuan
The Web of Things (WoT) enhances interoperability across web-based and ubiquitous computing platforms while complementing existing IoT standards.
no code implementations • 19 Jan 2025 • Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, Nian-Feng Tzeng
Our crafted noises ensure that 1) the noisy prediction vectors of any two adjacent nodes exhibit a similarity level comparable to that of two non-adjacent nodes, and 2) the model's predictions remain unchanged, incurring zero utility loss.
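A minimal sketch of that constraint (illustrative only; the function name and Gaussian noise model are assumptions, not the paper's algorithm): perturb a node's prediction vector so it no longer resembles its neighbours' vectors, while keeping the argmax, and hence the predicted label, intact.

```python
import numpy as np

def perturb_prediction(pred, noise_scale=0.3, rng=None):
    """Perturb a prediction vector (numpy array) so it looks dissimilar to
    neighbours' vectors while the predicted label (argmax) stays the same."""
    rng = rng or np.random.default_rng()
    top = int(np.argmax(pred))
    noisy = pred + rng.normal(0.0, noise_scale, size=pred.shape)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()                      # keep a valid probability vector
    new_top = int(np.argmax(noisy))
    if new_top != top:                        # constraint 2: label unchanged
        noisy[top], noisy[new_top] = noisy[new_top], noisy[top]
    return noisy
```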
no code implementations • 21 Jul 2024 • Yi Liu, Chengjun Cai, Xiaoli Zhang, Xingliang Yuan, Cong Wang
Our framework features an automated multi-modal jailbreak attack, wherein visual jailbreak prompts are produced by a red team VLM, and textual prompts are generated by a red team LLM guided by a reinforcement learning agent.
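A hedged skeleton of such a red-teaming loop, assuming the target VLM is a callable that returns text; every class and method name here is a placeholder, not the framework's actual interface.

```python
# Placeholder skeleton of a multi-modal red-teaming loop (illustrative only).
class RedTeamVLM:
    def visual_prompt(self, history):
        return "adversarial_image"               # stand-in for a generated image

class RedTeamLLM:
    def textual_prompt(self, history, action):
        return f"jailbreak text ({action})"      # stand-in for a generated prompt

class RLAgent:
    def act(self, history):
        return "rephrase"                        # e.g., pick a rewriting tactic
    def learn(self, reward):
        pass                                     # policy update omitted

def red_team_step(target_vlm, vlm_red, llm_red, agent, history):
    action = agent.act(history)
    image = vlm_red.visual_prompt(history)
    text = llm_red.textual_prompt(history, action)
    response = target_vlm(image, text)           # query the model under test
    reward = float("sorry" not in response.lower())  # toy success signal
    agent.learn(reward)
    history.append((text, response))
    return response, reward
```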
no code implementations • 18 Jun 2024 • Yi Liu, Cong Wang, Xingliang Yuan
To fill this gap, we first formally define generalization error and establish its connection to catastrophic forgetting, paving the way for the development of a clean-label data poisoning attack named BadSampler.
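One way to read the idea, sketched under assumptions (the function name and the simple loss-ranking rule are illustrative, not BadSampler's exact algorithm): a compromised client keeps its labels untouched and simply feeds its hardest samples, amplifying generalization error and forgetting.

```python
import numpy as np

def bad_sampler_batch(losses, batch_size):
    """Return indices of the highest-loss samples for the next poisoned batch;
    labels are never modified, so the attack stays clean-label."""
    order = np.argsort(losses)[::-1]             # hardest samples first
    return order[:batch_size]
```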
no code implementations • 18 Jun 2024 • Viet Vo, Thusitha Dayaratne, Blake Haydon, Xingliang Yuan, Shangqi Lai, Sharif Abuadbba, Hajime Suzuki, Carsten Rudolph
In this context, federated learning (FL)-enabled spectrum sensing has garnered wide attention, as it allows an aggregated ML model to be constructed without disclosing the private spectrum sensing information of wireless user devices.
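A minimal FedAvg-style aggregation sketch showing that privacy pattern: devices contribute only locally trained sensing-model weights, never raw spectrum measurements (illustrative; not the paper's exact protocol).

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Size-weighted average of locally trained model weights (numpy arrays)."""
    total = sum(client_sizes)
    return sum(np.asarray(w) * (n / total)
               for w, n in zip(client_weights, client_sizes))
```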
no code implementations • 6 Jun 2024 • Shuo Huang, William MacLean, Xiaoxi Kang, Anqi Wu, Lizhen Qu, Qiongkai Xu, Zhuang Li, Xingliang Yuan, Gholamreza Haffari
Concerns about privacy leakage are growing in academia and industry when NLP models from third-party providers are employed to process sensitive texts.
no code implementations • 23 May 2024 • He Zhang, Bang Wu, Xiangwen Yang, Xingliang Yuan, Xiaoning Liu, Xun Yi
Dynamic graph neural networks (DGNNs) have emerged and been widely deployed in various web applications (e.g., Reddit) to serve users (e.g., personalized content delivery) due to their remarkable ability to learn from complex and dynamic user interaction data.
1 code implementation • 13 Dec 2023 • Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan
These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring exact training data while respecting the proprietary nature of such data.
no code implementations • 20 Sep 2023 • Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer
This paper proposes to develop an Equitable and Responsible AI framework with enabling techniques and algorithms for the Internet of Energy (IoE), RAI4IoE for short.
1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen
In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.
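A hedged sketch of one lexical-backdoor primitive, assuming a Hugging Face-style tokenizer and an `nn.Embedding` layer: overwrite an existing (ideally rare) trigger token's embedding with a target token's, so the frozen model silently reads the trigger as the target without any retraining. TFLexAttack's actual dictionary manipulation is more involved.

```python
import torch

def lexical_backdoor(embedding_layer, tokenizer, trigger_word, target_word):
    """Copy the target token's embedding onto the trigger token; no gradient
    update or retraining is involved (training-free)."""
    trig_id = tokenizer.convert_tokens_to_ids(trigger_word)
    targ_id = tokenizer.convert_tokens_to_ids(target_word)
    with torch.no_grad():
        embedding_layer.weight[trig_id] = embedding_layer.weight[targ_id].clone()
```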
no code implementations • 30 Jan 2023 • He Zhang, Xingliang Yuan, Shirui Pan
In this paper, we pioneer the exploration of the interaction between the privacy risks of edge leakage and the individual fairness of a GNN.
no code implementations • 16 May 2022 • He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics.
no code implementations • 14 Mar 2022 • Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, Bo Li
Existing machine unlearning techniques focus on centralized training, where the server must have access to all data holders' training data to conduct the unlearning process.
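A naive baseline sketch of federated unlearning (not this paper's method): replay stored per-round client updates while dropping the contributions of the client requesting erasure. The data layout here is an assumption for illustration.

```python
def naive_federated_unlearning(initial_model, round_updates, forget_client):
    """Replay stored per-round deltas, excluding the leaving client.
    round_updates: list of dicts {client_id: model_delta (numpy array)}."""
    model = initial_model
    for updates in round_updates:
        kept = [delta for cid, delta in updates.items() if cid != forget_client]
        if kept:
            model = model + sum(kept) / len(kept)
    return model
```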
no code implementations • 25 Feb 2022 • He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan
By projecting the strategy, our method dramatically reduces the cost of learning a new attack strategy when the attack budget changes.
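An illustrative sketch of that projection idea (not the paper's exact procedure): rank candidate perturbations once by their learned scores, then reuse the ranking for any new budget by taking the top-k, instead of relearning a strategy from scratch.

```python
import numpy as np

def project_attack(perturbation_scores, budget):
    """Reuse one learned ranking of candidate perturbations for any budget."""
    ranking = np.argsort(perturbation_scores)[::-1]   # best-scoring first
    return ranking[:budget]
```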
no code implementations • 4 Feb 2022 • Yifeng Zheng, Shangqi Lai, Yi Liu, Xingliang Yuan, Xun Yi, Cong Wang
In this paper, we present a system design which offers efficient protection of individual model updates throughout the learning procedure, allowing clients to only provide obscured model updates while a cloud server can still perform the aggregation.
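A sketch of pairwise additive masking in the spirit of classic secure aggregation, not necessarily this paper's exact construction; `pair_seed` stands in for a shared secret that would come from key agreement between two clients.

```python
import numpy as np

def pair_seed(a, b):
    # Stand-in for a shared secret agreed between clients a and b.
    return min(a, b) * 100003 + max(a, b)

def mask_update(update, my_id, peer_ids):
    """Obscure a local model update with pairwise masks: the lower-id client
    adds each mask and the higher-id client subtracts it, so all masks cancel
    when the server sums the obscured updates."""
    masked = np.asarray(update, dtype=np.float64).copy()
    for peer in peer_ids:
        mask = np.random.default_rng(pair_seed(my_id, peer)).normal(size=masked.shape)
        masked += mask if my_id < peer else -mask
    return masked
```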
1 code implementation • 17 Oct 2021 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
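The threshold-based flavour in a minimal sketch (the threshold value is illustrative, not taken from the paper): samples on which the target model is highly confident are guessed to be training members.

```python
import numpy as np

def threshold_membership_inference(confidences, tau=0.9):
    """Flag high-confidence samples as presumed training members."""
    return np.asarray(confidences) >= tau    # boolean member / non-member guesses
```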
no code implementations • 8 Dec 2020 • Yi Liu, Xingliang Yuan, Ruihui Zhao, Cong Wang, Dusit Niyato, Yefeng Zheng
Extensive case studies have shown that our attacks are effective on different datasets and common semi-supervised learning methods.
1 code implementation • 24 Oct 2020 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
Machine learning models have been shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker posing as a client.
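A generic sketch of model extraction, not the paper's GNN-specific attack: the attacker, posing as a client, queries the victim model and fits a local surrogate on the returned labels. The surrogate architecture below is an arbitrary choice.

```python
from sklearn.neural_network import MLPClassifier

def extract_surrogate(victim_predict, query_inputs):
    """Query the victim through its public interface, then train a surrogate
    on the pseudo-labels it returns."""
    pseudo_labels = [victim_predict(x) for x in query_inputs]
    surrogate = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    surrogate.fit(query_inputs, pseudo_labels)
    return surrogate
```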
no code implementations • 4 Jun 2020 • Yi Liu, Xingliang Yuan, Zehui Xiong, Jiawen Kang, Xiaofei Wang, Dusit Niyato
As the 5G communication networks are being widely deployed worldwide, both industry and academia have started to move beyond 5G and explore 6G communications.
no code implementations • 14 Nov 2019 • Shangqi Lai, Xingliang Yuan, Amin Sakzad, Mahsa Salehi, Joseph K. Liu, Dongxi Liu
It realises several cryptographic modules via efficient and interchangeable protocols to support the above cryptographic operations, and composes them into the overall protocol to enable outlier detection over encrypted datasets.
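As one illustrative building block of such interchangeable modules, a toy additive secret-sharing scheme over a prime field (not the system's actual protocols):

```python
import secrets

PRIME = 2**61 - 1

def share(value, n_parties=2):
    """Split a non-negative integer into additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME
```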
no code implementations • 29 Aug 2019 • Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang
To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference with the Student model is first performed to narrow down the candidate differentiators to be assembled, and only a small, fixed number of them is then chosen to effectively validate clean inputs or reject adversarial ones.
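A hedged sketch of that two-phase flow, with placeholder interfaces for the Student model and the differentiators (the paper's actual components differ):

```python
def two_phase_defence(x, student, differentiators, select_k=3):
    """Phase 1: the Student model's cheap prediction shortlists the candidate
    differentiators for that label. Phase 2: only select_k of them vote on
    whether to accept the input or reject it as adversarial."""
    label = student.predict(x)
    candidates = differentiators[label][:select_k]
    votes = [d.is_adversarial(x) for d in candidates]
    return "reject" if sum(votes) > len(votes) / 2 else label
```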