Search Results for author: Xingliang Yuan

Found 14 papers, 4 papers with code

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks

1 code implementation • 13 Dec 2023 • Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan

These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring exact training data while respecting the proprietary nature of such data.

RAI4IoE: Responsible AI for Enabling the Internet of Energy

no code implementations • 20 Sep 2023 • Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer

This paper develops an Equitable and Responsible AI framework, with enabling techniques and algorithms, for the Internet of Energy (IoE), in short, RAI4IoE.

Management

Training-free Lexical Backdoor Attacks on Language Models

1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen

In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.

Backdoor Attack • Data Poisoning +1

Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks

no code implementations • 30 Jan 2023 • He Zhang, Xingliang Yuan, Shirui Pan

In this paper, we pioneer the exploration of the interaction between the privacy risks of edge leakage and the individual fairness of a GNN.

Fairness

Trustworthy Graph Neural Networks: Aspects, Methods and Trends

no code implementations • 16 May 2022 • He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei

Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics.

Drug Discovery • Edge-computing +4

The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining

no code implementations • 14 Mar 2022 • Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, Bo Li

Existing machine unlearning techniques focus on centralized training, where access to all holders' training data is a must for the server to conduct the unlearning process.

Federated Learning • Machine Unlearning

Projective Ranking-based GNN Evasion Attacks

no code implementations • 25 Feb 2022 • He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan

By projecting the strategy, our method dramatically reduces the cost of learning a new attack strategy when the attack budget changes.

Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization

no code implementations • 4 Feb 2022 • Yifeng Zheng, Shangqi Lai, Yi Liu, Xingliang Yuan, Xun Yi, Cong Wang

In this paper, we present a system design which offers efficient protection of individual model updates throughout the learning procedure, allowing clients to only provide obscured model updates while a cloud server can still perform the aggregation.

Federated Learning
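The abstract above describes clients submitting obscured model updates that a cloud server can still aggregate. A minimal sketch of one classic way to realise this, pairwise additive masking as used in standard secure-aggregation designs; the function, seed, and values below are illustrative, not the paper's actual protocol:

```python
# Toy illustration of obscured-update aggregation via pairwise masking:
# each client pair shares a random mask that cancels when the server sums.
import random

def mask_updates(updates, seed=42):
    """Add a shared random mask r to client i and subtract it from client j
    for every pair (i, j), so each masked update looks random on its own
    while the masks cancel out in the aggregate."""
    rng = random.Random(seed)
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-1, 1)  # stands in for a pairwise shared secret
            masked[i] += r
            masked[j] -= r
    return masked

updates = [0.2, -0.5, 0.9]          # toy per-client model updates
masked = mask_updates(updates)
# Individual masked values differ from the true updates, yet the server's
# sum equals the true sum:
print(abs(sum(masked) - sum(updates)) < 1e-9)  # True
```

Real protocols additionally handle client dropouts and derive the pairwise secrets via key agreement rather than a fixed seed.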

Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications

1 code implementation • 17 Oct 2021 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, based on different adversarial capabilities.

Graph Classification • Inference Attack +1
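The two attack types above can be illustrated in miniature. A sketch of the threshold-based variant, built only on the generic intuition that models tend to be more confident on their training data; the helper name and threshold are hypothetical, not taken from the paper:

```python
# Hypothetical threshold-based membership inference sketch.

def predict_membership(confidences, threshold=0.9):
    """Flag a sample as a training member if the target model's
    top-class confidence meets the threshold."""
    return [conf >= threshold for conf in confidences]

# Toy confidences the target model assigns to its predicted class.
train_like = [0.98, 0.95, 0.93]  # high confidence -> flagged as members
test_like = [0.55, 0.71, 0.60]   # lower confidence -> flagged as non-members

print(predict_membership(train_like + test_like))
```

A training-based attack would instead fit a binary classifier on (confidence vector, member/non-member) pairs collected from shadow models, which requires a stronger adversary.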

Poisoning Semi-supervised Federated Learning via Unlabeled Data: Attacks and Defenses

no code implementations • 8 Dec 2020 • Yi Liu, Xingliang Yuan, Ruihui Zhao, Cong Wang, Dusit Niyato, Yefeng Zheng

Extensive case studies have shown that our attacks are effective on different datasets and common semi-supervised learning methods.

Federated Learning • Quantization

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

1 code implementation • 24 Oct 2020 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

Machine learning models face a severe threat from model extraction attacks, in which a well-trained private model owned by a service provider can be stolen by an attacker posing as a client.

Anomaly Detection • Model extraction
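The threat model above, an attacker who queries a black-box service and rebuilds its behaviour, can be shown with a toy example. The victim and surrogate below are deliberately simple stand-ins (a hidden 1-D decision threshold), not the paper's GNN setting:

```python
# Toy model extraction: steal a black-box classifier by querying it.

def victim(x):
    """Black-box service: the attacker only sees the predicted label."""
    return 1 if x >= 0.37 else 0  # hidden decision boundary

# 1. The attacker queries the service like an ordinary client.
queries = [i / 100 for i in range(100)]
labels = [victim(x) for x in queries]

# 2. The attacker fits a surrogate replicating the observed behaviour
#    (here: recover the decision threshold from the query/label pairs).
stolen_threshold = min(x for x, y in zip(queries, labels) if y == 1)

def surrogate(x):
    return 1 if x >= stolen_threshold else 0

# The surrogate now agrees with the victim on every queried input.
print(all(surrogate(x) == victim(x) for x in queries))  # True
```

Against a real model the attacker would train a substitute network on the query/response pairs instead of recovering a single threshold, but the query-then-replicate structure is the same.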

Federated Learning for 6G Communications: Challenges, Methods, and Future Directions

no code implementations • 4 Jun 2020 • Yi Liu, Xingliang Yuan, Zehui Xiong, Jiawen Kang, Xiaofei Wang, Dusit Niyato

As the 5G communication networks are being widely deployed worldwide, both industry and academia have started to move beyond 5G and explore 6G communications.

Federated Learning

Enabling Efficient Privacy-Assured Outlier Detection over Encrypted Incremental Datasets

no code implementations • 14 Nov 2019 • Shangqi Lai, Xingliang Yuan, Amin Sakzad, Mahsa Salehi, Joseph K. Liu, Dongxi Liu

It realises several cryptographic modules via efficient and interchangeable protocols to support the above cryptographic operations, and composes them into the overall protocol to enable outlier detection over encrypted datasets.

Anomaly Detection • Outlier Detection +1

Defeating Misclassification Attacks Against Transfer Learning

no code implementations • 29 Aug 2019 • Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang

To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference from the Student model is first performed to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to validate clean inputs or reject adversarial ones effectively.

Network Pruning • Transfer Learning
