no code implementations • 26 Jun 2024 • Vincent Guan, Florent Guépin, Ana-Maria Cretu, Yves-Alexandre de Montjoye
To measure the risk of a membership inference attack (MIA) performed by a realistic adversary, we develop the first Zero Auxiliary Knowledge (ZK) MIA on aggregate location data, which eliminates the need for an auxiliary dataset of real individual traces.
no code implementations • 24 May 2024 • Florent Guépin, Nataša Krčo, Matthieu Meeus, Yves-Alexandre de Montjoye
Taken together, our results show that current MIA evaluation averages the risk across datasets, leading to inaccurate risk estimates, and that the risk posed by attacks leveraging information about the target dataset is potentially underestimated.
no code implementations • 4 Jul 2023 • Florent Guépin, Matthieu Meeus, Ana-Maria Cretu, Yves-Alexandre de Montjoye
While membership inference attacks (MIAs) based on shadow modeling have become the standard to evaluate the privacy of synthetic data, they currently assume that the attacker has access to an auxiliary dataset sampled from a distribution similar to that of the training dataset.
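The shadow-modeling setup described above can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the "synthetic data release" is a noisy mean rather than a real generator, the attack feature is the distance from the release to the target record, and the classifier is a simple learned threshold. It is not the attack from any of the papers listed here, only a minimal instance of the shadow-modeling idea: train many shadow worlds with and without the target record, then learn how the release differs between the two.

```python
import numpy as np

rng = np.random.default_rng(0)

def release_synthetic_summary(train, rng):
    """Toy stand-in for a synthetic data generator (assumption):
    release a noisy mean of the training data."""
    return train.mean(axis=0) + rng.normal(0, 0.05, size=train.shape[1])

def shadow_model_mia(target, aux, n_shadow=200, n_train=32, rng=rng):
    """Shadow-modeling MIA sketch: build shadow training sets from the
    auxiliary data, half of them including the target record, and learn
    to distinguish the two worlds from the released summary."""
    scores, labels = [], []
    for _ in range(n_shadow):
        idx = rng.choice(len(aux), size=n_train, replace=False)
        train = aux[idx]
        member = rng.random() < 0.5
        if member:
            # "IN" world: swap one record for the target
            train = np.vstack([train[:-1], target])
        summary = release_synthetic_summary(train, rng)
        # attack feature: distance of the release to the target record
        scores.append(np.linalg.norm(summary - target))
        labels.append(member)
    scores, labels = np.array(scores), np.array(labels)
    # threshold attack: membership pulls the summary toward the target
    preds = scores < scores.mean()
    return (preds == labels).mean()  # attack accuracy on shadow worlds

aux = rng.normal(0, 1, size=(1000, 5))  # assumed auxiliary dataset
target = rng.normal(2, 1, size=5)       # an outlier target record
acc = shadow_model_mia(target, aux)
print(f"shadow-attack accuracy: {acc:.2f}")
```

Note that the attack hinges on the auxiliary dataset `aux`: the shadow worlds are only faithful if `aux` resembles the real training distribution, which is exactly the assumption the papers above relax.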
no code implementations • 17 Jun 2023 • Matthieu Meeus, Florent Guépin, Ana-Maria Cretu, Yves-Alexandre de Montjoye
When evaluating the privacy of synthetic data releases, the choice of vulnerable records matters as much as the accuracy of the MIA itself, including from a legal perspective.
1 code implementation • 16 Dec 2021 • Ana-Maria Creţu, Florent Guépin, Yves-Alexandre de Montjoye
Despite machine learning models being widely used today, the relationship between a model and its training dataset is not well understood.