Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation

5 Mar 2023 · Yixin Liu, Chenrui Fan, Pan Zhou, Lichao Sun

While graph-structured data is becoming increasingly popular across many fields, its growing use also raises concerns about the unauthorized exploitation of personal data to train commercial graph neural network (GNN) models, which can compromise privacy. To address this issue, we propose a novel method for generating unlearnable graph examples. By injecting delusive yet imperceptible noise into graphs with our Error-Minimizing Structural Poisoning (EMinS) module, we make the graphs unexploitable for model training. Notably, by modifying at most $5\%$ of the potential edges in the graph data, our method decreases the accuracy on the COLLAB dataset from $77.33\%$ to $42.47\%$.
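Since the paper's code has not been released, the following is only a minimal sketch of how error-minimizing structural poisoning could be instantiated: a surrogate GCN is briefly fitted to the graph, and edges are then greedily flipped in the direction that most *reduces* the surrogate's training loss, so that a downstream GNN learns the injected shortcut noise instead of useful structure. The names `SurrogateGCN` and `emins_poison`, the dense adjacency representation, the two-layer surrogate, and the greedy gradient-based flip rule are all illustrative assumptions, not the authors' implementation; only the $5\%$ edge budget comes from the abstract.

```python
# Hypothetical sketch of error-minimizing structural poisoning (not the
# authors' released EMinS code). Assumes a dense adjacency matrix `adj`,
# node features `x`, and node labels `labels`.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateGCN(nn.Module):
    """Two-layer GCN surrogate whose training error the poison minimizes."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        # Symmetrically normalize the adjacency with self-loops added.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(1).clamp(min=1e-6).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)  # node logits


def emins_poison(adj, x, labels, budget_ratio=0.05, steps=20):
    """Greedily flip edges whose change most decreases the surrogate's
    training loss, i.e. inject error-minimizing structural noise."""
    n = adj.size(0)
    model = SurrogateGCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    budget = int(budget_ratio * n * (n - 1) / 2)  # at most 5% of potential edges
    poisoned = adj.clone()

    for _ in range(steps):
        # Inner step: briefly fit the surrogate on the current poisoned graph.
        model.train()
        for _ in range(5):
            opt.zero_grad()
            F.cross_entropy(model(poisoned, x), labels).backward()
            opt.step()

        # Outer step: the loss gradient w.r.t. the adjacency estimates how
        # much each edge flip would change the training error.
        adj_var = poisoned.clone().requires_grad_(True)
        loss = F.cross_entropy(model(adj_var, x), labels)
        grad = torch.autograd.grad(loss, adj_var)[0]
        grad = (grad + grad.t()) / 2  # keep the graph undirected

        # Score = estimated loss decrease: removing an existing edge helps
        # when its gradient is positive; adding a missing edge helps when
        # its gradient is negative.
        score = torch.where(poisoned > 0, grad, -grad)
        score.fill_diagonal_(float('-inf'))
        i, j = divmod(int(score.argmax()), n)
        if budget == 0 or score[i, j] <= 0:
            break  # budget exhausted or no loss-reducing flip remains
        poisoned[i, j] = poisoned[j, i] = 1 - poisoned[i, j]
        budget -= 1

    return poisoned
```

The greedy one-flip-at-a-time rule is chosen here only for clarity; a faithful reproduction would follow whatever bi-level optimization schedule the paper actually specifies.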
