Membership Inference Attacks on Knowledge Graphs

16 Apr 2021 · Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun

Membership inference attacks (MIAs) infer whether a specific data record was used to train a target model. MIAs have provoked many discussions in the information security community because they give rise to severe data privacy issues, especially for private and sensitive datasets. Knowledge Graphs (KGs), which describe domain-specific subjects and the relationships among them, are valuable and sensitive, for example medical KGs constructed from electronic health records. However, the privacy threat to knowledge graphs is critical yet rarely explored. In this paper, we conduct the first empirical evaluation of privacy threats to knowledge graphs posed by knowledge graph embedding (KGE) methods. We propose three types of membership inference attacks, according to attack difficulty level: transfer attacks (TAs), prediction loss-based attacks (PLAs), and prediction correctness-based attacks (PCAs). In the experiments, we run the three inference attacks against four standard KGE methods on three benchmark datasets. In addition, we evaluate the attacks on a medical KG and a financial KG. The results demonstrate that the proposed attack methods can easily expose the privacy leakage of knowledge graphs.
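
To make the prediction loss-based attack idea concrete, here is a minimal illustrative sketch, not the paper's implementation: it assumes a TransE-style scoring function, uses random stand-in embeddings, and the threshold `tau`, the helper `score_triple`, and the toy triples are all hypothetical.

```python
# Sketch of a prediction loss-based membership inference attack (PLA) against
# a TransE-style knowledge graph embedding model. Illustrative only: embeddings
# are random stand-ins for a trained target model, and the threshold would be
# calibrated (e.g., on shadow data) in a realistic attack.
import numpy as np

rng = np.random.default_rng(0)
NUM_ENTITIES, NUM_RELATIONS, DIM = 100, 10, 32

# Stand-in for the target KGE model's trained parameters.
entity_emb = rng.normal(size=(NUM_ENTITIES, DIM))
relation_emb = rng.normal(size=(NUM_RELATIONS, DIM))

def score_triple(h, r, t):
    """TransE score: lower ||h + r - t|| means the model fits the triple better."""
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def infer_membership(triple, tau):
    """Predict 'member' if the target model's loss for the triple is below tau.

    Intuition: triples seen during training tend to receive lower loss than
    unseen triples, so a simple threshold separates members from non-members.
    """
    h, r, t = triple
    return score_triple(h, r, t) < tau

# Toy usage: candidate triples whose membership the adversary wants to infer.
candidates = [(3, 1, 7), (42, 5, 9)]
tau = 5.0  # hypothetical threshold
for triple in candidates:
    print(triple, "-> member" if infer_membership(triple, tau) else "-> non-member")
```

A prediction correctness-based attack follows the same pattern but only observes whether the target model ranks the true triple highly (a binary signal), while a transfer attack trains a shadow KGE model to approximate the target before applying such a threshold.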
