Search Results for author: Jiayuan Ye

Found 6 papers, 3 papers with code

Leave-one-out Distinguishability in Machine Learning

1 code implementation · 29 Sep 2023 · Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri

We introduce an analytical framework to quantify the changes in a machine learning algorithm's output distribution following the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD).

Gaussian Processes · Memorization
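
A minimal sketch of the leave-one-out distinguishability idea above, assuming a Gaussian Process surrogate for the learning algorithm (the GP connection is suggested by the tags; the RBF kernel, query points, and KL-based distance are illustrative assumptions, not the paper's exact construction):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def gaussian_kl(mu_p, var_p, mu_q, var_q):
        # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for scalar Gaussians.
        return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(30, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)
    z_x, z_y = np.array([[2.5]]), np.array([1.5])     # the extra ("leave-one-out") point

    gp_without = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X, y)
    gp_with = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(
        np.vstack([X, z_x]), np.concatenate([y, z_y]))

    queries = np.linspace(-3, 3, 50).reshape(-1, 1)
    mu0, sd0 = gp_without.predict(queries, return_std=True)
    mu1, sd1 = gp_with.predict(queries, return_std=True)

    # Per-query distinguishability of the two predictive (output) distributions.
    lood = gaussian_kl(mu1, sd1 ** 2, mu0, sd0 ** 2)
    print("max LOOD proxy:", lood.max(), "at x =", queries[lood.argmax()][0])

The largest values of this proxy tend to appear near the added point, which matches the intuition that including a data point changes the output distribution most where that point is informative.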

Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning

1 code implementation · 11 Sep 2023 · Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri

Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.

Federated Learning · Image Classification · +1
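
A hedged sketch of the idea named in the title above: a FedAvg-style loop in which clients average only a shared representation (feature extractor) and keep their prediction heads local. The model split, parameter names, and training loop are illustrative assumptions, not the paper's algorithm:

    import copy
    import torch
    import torch.nn as nn

    class ClientModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.representation = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
            self.head = nn.Linear(128, 10)        # stays local, never shared

        def forward(self, x):
            return self.head(self.representation(x))

    def average_representations(models):
        # FedAvg applied only to the representation sub-module.
        avg = copy.deepcopy(models[0].representation.state_dict())
        for name in avg:
            avg[name] = torch.stack([m.representation.state_dict()[name] for m in models]).mean(0)
        for m in models:
            m.representation.load_state_dict(avg)

    clients = [ClientModel() for _ in range(4)]
    for rnd in range(3):                          # communication rounds
        for m in clients:
            opt = torch.optim.SGD(m.parameters(), lr=0.1)
            x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))  # stand-in local batch
            opt.zero_grad()
            nn.functional.cross_entropy(m(x), y).backward()
            opt.step()
        average_representations(clients)          # only the representation leaves the client

Because the head never leaves the client, repeated rounds expose only the shared representation, which is the lever for trading utility against leakage.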

Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)

no code implementations · 10 Mar 2022 · Jiayuan Ye, Reza Shokri

We prove that, in these settings, our privacy bound converges exponentially fast and is substantially smaller than the composition bounds, notably after a small number of training epochs.
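
To illustrate the contrast drawn above, the sketch below compares a basic linear-in-steps composition curve with a schematic exponentially converging curve; the constants and the converging functional form are placeholders for illustration, not the bound proved in the paper:

    import numpy as np

    steps = np.arange(1, 501)              # training iterations
    eps_step = 0.05                        # assumed per-step privacy cost of one noisy update

    eps_composition = eps_step * steps     # basic composition: grows without bound
    eps_limit, rate = 2.0, 0.02            # placeholder constants for a converging bound
    eps_hidden_state = eps_limit * (1 - np.exp(-rate * steps))   # schematic exponential convergence

    for t in (10, 100, 500):
        print(f"t={t:>3}  composition={eps_composition[t-1]:6.2f}  "
              f"hidden-state (schematic)={eps_hidden_state[t-1]:.2f}")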

Privacy Auditing of Machine Learning using Membership Inference Attacks

no code implementations · 29 Sep 2021 · Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Reza Shokri

In this paper, we present a framework that can explain the implicit assumptions and simplifications made in prior work.

BIG-bench Machine Learning
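
As a concrete reference point for the membership-inference setting above, here is a standard loss-threshold attack often used in privacy auditing; the synthetic data, target model, and scoring rule are generic illustrations, not the framework developed in the paper:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 20))
    y = (X[:, 0] + 0.5 * rng.standard_normal(2000) > 0).astype(int)
    X_train, y_train = X[:1000], y[:1000]        # members
    X_out, y_out = X[1000:], y[1000:]            # non-members

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def per_example_loss(model, X, y):
        # Cross-entropy loss of the target model on each example.
        p = model.predict_proba(X)
        return -np.log(p[np.arange(len(y)), y] + 1e-12)

    loss_in = per_example_loss(model, X_train, y_train)
    loss_out = per_example_loss(model, X_out, y_out)

    # Attack score: lower loss => more likely to be a training member.
    scores = np.concatenate([-loss_in, -loss_out])
    labels = np.concatenate([np.ones_like(loss_in), np.zeros_like(loss_out)])
    print("membership-inference AUC:", roc_auc_score(labels, scores))

An AUC close to 0.5 suggests the model leaks little membership signal, while values well above 0.5 indicate distinguishable behavior on training data.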

Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent

no code implementations · NeurIPS 2021 · Rishav Chourasia, Jiayuan Ye, Reza Shokri

What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is private?
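
A minimal sketch of the kind of iterative randomized learner asked about above: noisy (projected) gradient descent on a least-squares loss, where only the final iterate would be released and the intermediate iterates form the hidden internal state. The loss, step size, noise scale, and projection radius are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    theta_true = rng.standard_normal(5)
    y = X @ theta_true + 0.1 * rng.standard_normal(200)

    def noisy_gd(X, y, steps=500, lr=0.05, sigma=0.1, radius=5.0):
        # Noisy gradient descent (a discretization of Langevin dynamics) on least squares.
        theta = np.zeros(X.shape[1])
        for _ in range(steps):
            grad = X.T @ (X @ theta - y) / len(y)
            theta = theta - lr * grad + sigma * np.sqrt(2 * lr) * rng.standard_normal(theta.shape)
            norm = np.linalg.norm(theta)
            if norm > radius:                  # projection keeps iterates in a bounded set
                theta *= radius / norm
        return theta                           # only this final state is released

    print(noisy_gd(X, y))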
