Search Results for author: Casey Meehan

Found 9 papers, 3 papers with code

Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD

no code implementations · 1 Jul 2023 · Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot

Put together, our evaluation shows that this new DP-SGD analysis lets us formally establish that, when trained on common benchmarks, DP-SGD leaks significantly less privacy for many datapoints than the current data-independent guarantee suggests.
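
For context, a minimal NumPy sketch of the standard DP-SGD step that the data-independent guarantee is calibrated to: every per-example gradient is clipped to a fixed norm, and noise is scaled to that worst-case bound regardless of how alike the gradients actually are. The toy linear model and all parameter names below are illustrative, not the paper's code.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step on a toy linear model (illustrative sketch only).

    Per-example gradients are clipped to `clip_norm`, so the data-independent
    sensitivity of the summed gradient is `clip_norm` no matter how similar the
    gradients actually are -- the overestimation the paper revisits.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]

    # Per-example gradients of squared error for a linear model.
    residuals = X @ w - y                      # shape (n,)
    grads = residuals[:, None] * X             # shape (n, d)

    # Clip each per-example gradient to norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum and add Gaussian noise calibrated to the worst-case sensitivity clip_norm.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / n
```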

Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning

1 code implementation · NeurIPS 2023 · Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo

Self-supervised learning (SSL) algorithms can produce useful image representations by learning to associate different parts of natural images with one another.

Memorization · Self-Supervised Learning
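
To make "associating different parts of natural images with one another" concrete, here is a generic SimCLR-style contrastive (InfoNCE) objective over embeddings of two crops of the same images; it is an illustrative sketch, not the paper's training code or its déjà vu memorization test.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic contrastive (InfoNCE) loss between embeddings of two crops.

    z1[i] and z2[i] come from two different crops of image i; the loss pulls
    matching pairs together and pushes non-matching pairs apart, which is the
    'associate different parts of an image' signal SSL relies on.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal
```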

Privacy Amplification by Subsampling in Time Domain

no code implementations · 13 Jan 2022 · Tatsuki Koga, Casey Meehan, Kamalika Chaudhuri

When this is the case, we observe that the influence of a single participant (sensitivity) can be reduced by subsampling and/or filtering in time, while still meeting privacy requirements.

Time Series · Time Series Analysis
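
A rough sketch of the subsampling intuition: if each participant contributes a bounded amount and only a fraction of time points are released, the noise needed for a given privacy budget shrinks. The Laplace mechanism, the keep-every-k subsampling, and the one-retained-point-per-user assumption below are illustrative simplifications, not the paper's mechanism.

```python
import numpy as np

def release_subsampled_counts(counts, epsilon, keep_every=4, per_user_bound=1, rng=None):
    """Release a subsampled time series of counts with Laplace noise
    (an illustrative sketch, not the paper's exact mechanism).

    If each participant contributes at most `per_user_bound` to at most one
    retained time point after subsampling (an assumption for this sketch),
    the worst-case influence of a single participant on the released vector
    stays small, so less noise is needed for the same epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    kept = counts[::keep_every]

    # L1 sensitivity of the released vector to one participant under the
    # assumption stated above.
    sensitivity = per_user_bound

    noise = rng.laplace(scale=sensitivity / epsilon, size=kept.shape)
    return kept + noise
```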

Privacy Implications of Shuffling

no code implementations · ICLR 2022 · Casey Meehan, Amrita Roy Chowdhury, Kamalika Chaudhuri, Somesh Jha

LDP deployments are vulnerable to inference attacks, as an adversary can link the noisy responses to their identity and, subsequently, to auxiliary information using the order of the data.

A Shuffling Framework for Local Differential Privacy

no code implementations · 11 Jun 2021 · Casey Meehan, Amrita Roy Chowdhury, Kamalika Chaudhuri, Somesh Jha

LDP deployments are vulnerable to inference attacks, as an adversary can link the noisy responses to their identity and, subsequently, to auxiliary information using the order of the data.
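
Both shuffling entries start from the observation above: the order of noisy responses links them to identities. A minimal sketch, assuming binary randomized response as the local randomizer and a uniform permutation to break the positional link; the papers study more nuanced shuffling frameworks than this.

```python
import numpy as np

def randomized_response(bits, epsilon, rng):
    """Binary randomized response: keep the true bit with prob e^eps / (e^eps + 1)."""
    bits = np.asarray(bits)
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(bits.shape) > p_truth
    return np.where(flip, 1 - bits, bits)

def shuffled_reports(bits, epsilon, rng=None):
    """LDP reports with the order-based identity link destroyed by a shuffle.

    Without the shuffle, report i is trivially linked to participant i, and an
    adversary with auxiliary information can exploit that order. A uniform
    permutation removes the positional link (illustrative sketch only).
    """
    rng = np.random.default_rng() if rng is None else rng
    reports = randomized_response(bits, epsilon, rng)
    return rng.permutation(reports)
```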

Location Trace Privacy Under Conditional Priors

1 code implementation · 23 Feb 2021 · Casey Meehan, Kamalika Chaudhuri

Providing meaningful privacy to users of location based services is particularly challenging when multiple locations are revealed in a short period of time.
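
A small numerical illustration of why closely spaced locations are hard to protect: when the points in a trace are strongly correlated, perturbing each one independently and averaging the noisy reports largely recovers the true location. The independent Gaussian noise here is a generic strawman for illustration, not the mechanism proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trace of 50 locations revealed within a short window: the user barely moves,
# so the points are almost perfectly correlated.
true_location = np.array([32.88, -117.23])            # (lat, lon), arbitrary point
trace = true_location + rng.normal(scale=1e-4, size=(50, 2))

# Perturb each point independently with Gaussian noise.
noisy_trace = trace + rng.normal(scale=0.01, size=trace.shape)

# Averaging the noisy reports washes the noise out: the error of the mean is
# far smaller than the error of any single noisy report, re-locating the user.
print(np.linalg.norm(noisy_trace.mean(axis=0) - true_location))   # small
print(np.linalg.norm(noisy_trace[0] - true_location))             # ~10x larger
```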

A Non-Parametric Test to Detect Data-Copying in Generative Models

1 code implementation · 12 Apr 2020 · Casey Meehan, Kamalika Chaudhuri, Sanjoy Dasgupta

Detecting overfitting in generative models is an important challenge in machine learning.

BIG-bench Machine Learning
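
One hedged sketch of how a non-parametric data-copying check could look: compare distances from generated samples to their nearest training points against the same distances for held-out test samples, using a rank test. The specific statistic below (a one-sided Mann-Whitney U over the whole space) is an illustrative choice and may differ from the paper's exact test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def data_copying_pvalue(train, test, generated):
    """Rank-test sketch for data-copying (illustrative; may differ from the paper).

    If generated samples sit systematically *closer* to the training set than
    held-out test samples do, that is evidence of data-copying rather than
    healthy generalization.
    """
    def nn_dist(points):
        # Distance from each point to its nearest training sample.
        d = np.linalg.norm(points[:, None, :] - train[None, :, :], axis=-1)
        return d.min(axis=1)

    gen_d, test_d = nn_dist(generated), nn_dist(test)
    # One-sided test: are generated distances stochastically smaller?
    _, p_value = mannwhitneyu(gen_d, test_d, alternative="less")
    return p_value
```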

Location Trace Privacy Under Conditional Priors

no code implementations · 9 Dec 2019 · Casey Meehan, Kamalika Chaudhuri

Providing meaningful privacy to users of location based services is particularly challenging when multiple locations are revealed in a short period of time.
