Search Results for author: Xiangwen Yang

Found 4 papers, 3 papers with code

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks

1 code implementation • 13 Dec 2023 • Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan

These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring exact training data while respecting the proprietary nature of such data.

Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications

1 code implementation • 17 Oct 2021 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities; a hedged sketch of the threshold-based variant follows this entry.

Graph Classification • Inference Attack • +1
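As a companion to the entry above, here is a minimal sketch of a threshold-based membership inference attack: the attacker guesses that a graph was a training member when the target model's top prediction confidence exceeds a threshold. The `model` interface, the use of softmax confidence, and the threshold value of 0.9 are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): threshold-based membership
# inference for graph classification. High prediction confidence on an
# input is taken as evidence that the input was a training member,
# exploiting the tendency of models to be more confident on data they
# were trained on.
import torch

def threshold_membership_attack(model, graph, threshold=0.9):
    """Guess membership from the victim model's prediction confidence."""
    model.eval()
    with torch.no_grad():
        logits = model(graph)  # victim GNN's output for one graph
        confidence = torch.softmax(logits, dim=-1).max().item()
    # Confidence at or above the threshold -> guess "training member".
    return confidence >= threshold
```

In practice the threshold would be calibrated, e.g. on shadow data; the fixed 0.9 here is only a placeholder.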

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

1 code implementation • 24 Oct 2020 • Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker posing as a client; a hedged sketch of such a query-based attack follows this entry.

Anomaly Detection • Model Extraction
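The following is a minimal sketch of the query-based extraction idea, assuming PyTorch victim and surrogate models with the same input interface: the attacker queries the victim as a client, records its soft predictions, and distils them into a surrogate. The names (`victim`, `surrogate`, `query_graphs`) and the KL-distillation objective are illustrative assumptions, not the paper's taxonomy or realization.

```python
# Minimal sketch (illustrative, not the paper's code): a model extraction
# attack that trains a surrogate GNN to mimic a victim model using only
# query access to the victim's predictions.
import torch
import torch.nn.functional as F

def extract_model(victim, surrogate, query_graphs, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=lr)
    victim.eval()
    for _ in range(epochs):
        for graph in query_graphs:
            with torch.no_grad():
                soft_labels = victim(graph).softmax(dim=-1)  # query as a "client"
            optimizer.zero_grad()
            log_probs = surrogate(graph).log_softmax(dim=-1)
            # Train the surrogate to match the victim's output distribution.
            loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
            loss.backward()
            optimizer.step()
    return surrogate
```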

Defeating Misclassification Attacks Against Transfer Learning

no code implementations • 29 Aug 2019 • Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang

To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference on the Student model is first performed to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to effectively validate clean inputs or reject adversarial ones; a hedged sketch follows this entry.

Network Pruning • Transfer Learning
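A minimal sketch of the two-phase idea described above, under heavy assumptions: `student` stands for the transferred model, `differentiators_for` is a hypothetical helper returning candidate differentiator models for a class, and a simple majority vote stands in for the paper's validation rule.

```python
# Minimal sketch (assumptions noted above, not the paper's defence code):
# phase 1 queries the Student model to narrow the candidate differentiators;
# phase 2 queries only a small, fixed number of them to accept or reject
# the input.
import torch

def two_phase_defence(student, differentiators_for, x, k=3):
    student.eval()
    with torch.no_grad():
        pred_class = student(x).argmax(dim=-1).item()  # phase 1: Student's guess
    candidates = differentiators_for(pred_class)[:k]   # keep a small fixed subset
    votes = 0
    with torch.no_grad():
        for d in candidates:                           # phase 2: validation
            votes += int(d(x).argmax(dim=-1).item() == pred_class)
    # Majority agreement -> accept as clean; otherwise reject as adversarial.
    return "clean" if votes > len(candidates) // 2 else "adversarial"
```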
