Membership Inference Attack in Face of Data Transformations

29 Sep 2021 · Jiyu Chen, Yiwen Guo, Hao Chen

Membership inference attacks (MIAs) on machine learning models, which try to infer whether an example is in the training dataset of a target model, have been widely studied in recent years as data privacy attracts increasing attention. One significant problem with the traditional MIA threat model is that it assumes the attacker can obtain exactly the same example as in the training dataset. In reality, however, the attacker is more likely to collect only a transformed version of the original example. For instance, the attacker may download a down-scaled image from a website, where the smaller image has the same content as the original image used for model training. In general, after transformations that do not affect its semantics, a transformed training member should still be treated the same as the original one regarding privacy leakage. In this paper, we propose extending the concept of MIAs to more realistic scenarios by considering data transformations, and we derive two MIAs for transformed examples: one follows the efficient loss-thresholding idea, and the other attempts to approximately reverse the transformations. We demonstrate the effectiveness of our attacks through extensive evaluations on multiple common data transformations and comparisons with other state-of-the-art attacks. Moreover, we study the coverage difference between our two attacks to show their respective limitations and advantages.
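To make the two attack ideas concrete, the sketch below illustrates them for a PyTorch image classifier. It is only a minimal sketch under stated assumptions: the function names (`loss_threshold_mia`, `reversal_mia`, `upscale`), the threshold `tau`, and the choice of bilinear upsampling to undo a down-scaling transformation are illustrative and not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def loss_threshold_mia(model, x, y, tau):
    # Classic loss-thresholding MIA: predict "member" when the target
    # model's loss on (x, y) falls below a calibrated threshold tau.
    model.eval()
    with torch.no_grad():
        logits = model(x.unsqueeze(0))              # (1, num_classes)
        loss = F.cross_entropy(logits, torch.tensor([y]))
    return loss.item() < tau

def reversal_mia(model, x_transformed, y, tau, reverse_fn):
    # Approximate-reversal MIA: first undo the suspected transformation
    # (e.g., upscale a down-scaled image), then apply loss thresholding.
    x_reversed = reverse_fn(x_transformed)
    return loss_threshold_mia(model, x_reversed, y, tau)

def upscale(x, size=(32, 32)):
    # Hypothetical reverse function for a down-scaling transformation:
    # bilinear upsampling back to the model's training resolution.
    return F.interpolate(x.unsqueeze(0), size=size, mode="bilinear",
                         align_corners=False).squeeze(0)

# Example usage (assumes a trained `target_model` and a (C, H, W)
# tensor `img` holding the transformed candidate example):
# is_member = reversal_mia(target_model, img, label, tau=0.5,
#                          reverse_fn=upscale)
```

Note that the reversal attack reduces to the loss-thresholding attack once the transformation has been approximately inverted; the two differ only in whether the attacker attempts to recover something close to the original training input first.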
