Machine learning models can leak information about the data on which they were trained.
To perform membership inference attacks, we leverage existing inference methods that exploit model predictions.
In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that preserve a high level of utility in the ML model.
A membership inference attack (MIA) determines whether a given record was part of a machine learning model's training data by querying the model.
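As a minimal illustration of this querying strategy, the sketch below mounts a confidence-threshold attack against a deliberately overfit toy model. The model, the synthetic data, and the threshold are all hypothetical assumptions for demonstration; real attacks target the prediction confidences of trained classifiers, but the principle is the same: members tend to receive unusually confident predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: "members" form the training set; "non-members" are
# fresh draws from the same distribution. (Both are illustrative assumptions.)
members = rng.normal(0.0, 1.0, size=(100, 5))
non_members = rng.normal(0.0, 1.0, size=(100, 5))

def confidence(x, train):
    """A deliberately overfit stand-in for a model's prediction confidence:
    inverse distance to the nearest training record. Members score exactly 1.0."""
    d = np.linalg.norm(train - x, axis=1).min()
    return 1.0 / (1.0 + d)

def is_member(x, train, threshold=0.9):
    """Threshold attack: flag a record as a training member if the model
    is unusually confident on it. The threshold value is an assumption."""
    return confidence(x, train) > threshold

# Attack success: true-positive rate on members vs. false-positive rate on non-members.
tpr = float(np.mean([is_member(x, members) for x in members]))
fpr = float(np.mean([is_member(x, members) for x in non_members]))
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")  # members are detected far more often
```

Because each member sits at distance zero from itself in the training set, its confidence is exactly 1.0 and the attack always flags it, while fresh records rarely fall that close to any training point; the gap between TPR and FPR is precisely the leakage the attack exploits.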
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.