Search Results for author: Bang Wu

Found 5 papers, 3 papers with code

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks

1 code implementation · 13 Dec 2023 · Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan

These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring exact training data while respecting the proprietary nature of such data.

Trustworthy Graph Neural Networks: Aspects, Methods and Trends

no code implementations · 16 May 2022 · He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei

Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics.

Drug Discovery · Edge-computing +4

Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications

1 code implementation · 17 Oct 2021 · Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.

Graph Classification · Inference Attack +1
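To make the distinction above concrete, the threshold-based variant can be sketched in a few lines: flag an input as a training member when the model's top prediction confidence exceeds a threshold. This is a generic illustration of the idea, not the paper's released code; the function name and the toy scores are hypothetical.

```python
import numpy as np

def threshold_membership_attack(confidences, threshold=0.9):
    """Predict 'member' (1) when the model's top softmax confidence
    exceeds the threshold, 'non-member' (0) otherwise."""
    top_conf = np.max(confidences, axis=1)  # highest class probability per input
    return (top_conf > threshold).astype(int)

# Toy softmax outputs: training members tend to receive higher confidence.
member_scores = np.array([[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]])
nonmember_scores = np.array([[0.55, 0.30, 0.15], [0.40, 0.35, 0.25]])

print(threshold_membership_attack(member_scores))     # members flagged
print(threshold_membership_attack(nonmember_scores))  # treated as non-members
```

A training-based attack replaces the fixed threshold with a classifier learned on shadow-model outputs, which is why it assumes a stronger adversary.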

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

1 code implementation · 24 Oct 2020 · Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan

Machine learning models face a severe threat from model extraction attacks, where a well-trained private model owned by a service provider can be stolen by an attacker posing as a client.

Anomaly Detection · Model Extraction

Defeating Misclassification Attacks Against Transfer Learning

no code implementations · 29 Aug 2019 · Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang

To avoid a bloated ensemble at inference time, we propose a two-phase defence: inference with the Student model is first performed to narrow down the candidate differentiators to be assembled, and only a small, fixed number of them are then chosen to validate clean inputs or reject adversarial ones.

Network Pruning · Transfer Learning
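The two-phase defence described above can be sketched as follows. This is a minimal illustration of the control flow only, assuming a hypothetical `student_predict` callable and per-class differentiator lists; none of these names come from the paper's code.

```python
def two_phase_defence(x, student_predict, differentiators, k=3):
    """Phase 1: use the Student model's prediction to narrow the candidate
    differentiators to those associated with the predicted class.
    Phase 2: query only a small, fixed number k of them; reject the input
    if any differentiator flags it as adversarial."""
    label = student_predict(x)
    candidates = differentiators.get(label, [])[:k]  # fixed-size subset
    if any(d(x) for d in candidates):  # d(x) == True -> flagged adversarial
        return None                    # reject the input
    return label                       # accept as clean, return prediction

# Toy usage: a student that always predicts class 0, with two
# differentiators for that class that both pass the input.
student = lambda x: 0
diffs = {0: [lambda x: False, lambda x: False], 1: [lambda x: True]}
print(two_phase_defence([1.0, 2.0], student, diffs))  # accepted, prints 0
```

The point of the fixed `k` is that inference cost stays constant regardless of how many differentiators exist in total.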
