Search Results for author: Giovanni Cherubin

Found 9 papers, 6 papers with code

Closed-Form Bounds for DP-SGD against Record-level Inference

no code implementations22 Feb 2024 Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

This paper presents a new approach to evaluating the privacy of machine learning models against specific record-level threats, such as membership and attribute inference, without the indirection through differential privacy (DP).

Attribute
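
The record-level threats the abstract refers to can be made concrete with a simple loss-threshold membership inference test. The sketch below only illustrates that threat model, not the paper's closed-form DP-SGD bounds; the data, model, and threshold rule are made up for illustration.

```python
# Toy record-level membership inference via a loss threshold.
# Illustrates the threat model only; NOT the paper's closed-form bounds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: first 100 points are members, the rest are non-members.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_train, y_train = X[:100], y[:100]
X_out, y_out = X[100:], y[100:]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def record_loss(model, x, label):
    """Cross-entropy loss of a single record under the trained model."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# Calibrate a threshold on non-member losses, then test a member record.
threshold = np.median([record_loss(model, x, l) for x, l in zip(X_out, y_out)])
target_loss = record_loss(model, X_train[0], y_train[0])
print("guess: member" if target_loss < threshold else "guess: non-member")
```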

Synthetic Data -- what, why and how?

no code implementations6 May 2022 James Jordon, Lukasz Szpruch, Florimond Houssiau, Mirko Bottarelli, Giovanni Cherubin, Carsten Maple, Samuel N. Cohen, Adrian Weller

This explainer document aims to provide an overview of the current state of the rapidly expanding work on synthetic data technologies, with a particular focus on privacy.

Approximating Full Conformal Prediction at Scale via Influence Functions

1 code implementation2 Feb 2022 Javier Abad, Umang Bhatt, Adrian Weller, Giovanni Cherubin

We prove that our method is a consistent approximation of full CP, and empirically show that the approximation error becomes smaller as the training set increases; e.g., for $10^{3}$ training points the two methods output p-values that are $<10^{-3}$ apart: a negligible error for any practical application.

Conformal Prediction
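
For context, full conformal prediction (the object being approximated) recomputes every nonconformity score on the training set augmented with the test point, once per candidate label; for model-based scores that recomputation means retraining. The minimal sketch below uses a k-NN nonconformity score to show full CP itself, not the paper's influence-function approximation; all names and data are illustrative.

```python
# Minimal full conformal prediction with a k-NN nonconformity score.
# Shows the per-candidate recomputation cost that the paper approximates
# with influence functions; this is NOT their approximation.
import numpy as np

def knn_nonconformity(X, y, i, k=3):
    """Sum of distances to the k nearest same-label points, excluding i."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    return np.sort(d[y == y[i]])[:k].sum()

def full_cp_pvalue(X_train, y_train, x_test, y_candidate, k=3):
    """p-value of a candidate label: rank of the test point's score among
    scores recomputed on the augmented training set (for model-based
    scores this recomputation means one retraining per candidate)."""
    X = np.vstack([X_train, x_test])
    y = np.append(y_train, y_candidate)
    scores = np.array([knn_nonconformity(X, y, i, k) for i in range(len(y))])
    return np.mean(scores >= scores[-1])

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, size=(25, 2)), rng.normal(3, 1, size=(25, 2))])
y_train = np.array([0] * 25 + [1] * 25)
x_test = np.array([2.9, 3.1])
for label in (0, 1):
    print(label, full_cp_pvalue(X_train, y_train, x_test, label))
```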

Reconstructing Training Data with Informed Adversaries

2 code implementations13 Jan 2022 Borja Balle, Giovanni Cherubin, Jamie Hayes

Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.

Memorization, Reconstruction Attack
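
The informed-adversary threat model can be sketched as follows: the adversary knows the released model and every training record except one, and tries to recover the missing record. The toy brute-force scorer below is only meant to make the setting concrete; the paper's attack uses a trained reconstructor network, and all names and data here are made up.

```python
# Toy informed-adversary reconstruction: score candidate records by how
# closely retraining with them reproduces the released model's parameters.
# Illustrates the threat model only, not the paper's reconstructor attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(int)
target_idx = 0                                   # the unknown record

# "Released" model, trained on the full dataset (including the target).
released = LogisticRegression(max_iter=1000).fit(X, y)

def refit_distance(candidate_x, candidate_y):
    """Retrain with the candidate in place of the unknown record and
    measure the parameter distance to the released model."""
    X_guess, y_guess = X.copy(), y.copy()
    X_guess[target_idx], y_guess[target_idx] = candidate_x, candidate_y
    m = LogisticRegression(max_iter=1000).fit(X_guess, y_guess)
    return np.linalg.norm(m.coef_ - released.coef_)

# Brute-force over a small candidate pool that happens to contain the truth.
candidates = np.vstack([rng.normal(size=(9, 3)), X[target_idx]])
scores = [min(refit_distance(c, 0), refit_distance(c, 1)) for c in candidates]
print("best candidate index:", int(np.argmin(scores)))  # ideally 9, the true record
```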

F-BLEAU: Fast Black-box Leakage Estimation

1 code implementation4 Feb 2019 Giovanni Cherubin, Konstantinos Chatzikokolakis, Catuscia Palamidessi

The state-of-the-art method for estimating such leakage measures is the frequentist paradigm, which approximates the system's internals from the observed frequencies of its inputs and outputs.

Cryptography and Security
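
The frequentist paradigm mentioned in the abstract can be made concrete: estimate the joint (secret, output) distribution from sample frequencies, then compute Bayes vulnerability and min-entropy leakage. The sketch below shows only that baseline, which F-BLEAU improves on with nearest-neighbor estimators; the toy system is made up.

```python
# Frequentist baseline for black-box leakage estimation: estimate the
# joint (secret, output) distribution from frequencies and compute the
# Bayes vulnerability and min-entropy leakage. NOT F-BLEAU's estimators.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy leaky system: secret in {0,1,2}; output = secret + small noise.
secrets = rng.integers(0, 3, size=5000)
outputs = secrets + rng.integers(0, 2, size=5000)
n = len(secrets)

# Frequentist estimate of the joint distribution from sample frequencies.
joint = Counter(zip(secrets.tolist(), outputs.tolist()))

prior_vuln = max(Counter(secrets.tolist()).values()) / n   # V(prior) = max_s P(s)

# Posterior Bayes vulnerability: sum over outputs of max_s P(s, o).
post_vuln = sum(
    max(c for (s, oo), c in joint.items() if oo == o) / n
    for o in set(outputs.tolist())
)

print(f"estimated Bayes risk: {1 - post_vuln:.3f}")
print(f"estimated min-entropy leakage: {np.log2(post_vuln / prior_vuln):.3f} bits")
```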

Bayes, not Naïve: Security Bounds on Website Fingerprinting Defenses

1 code implementation24 Feb 2017 Giovanni Cherubin

In this paper, we present a practical method to derive security bounds for any website fingerprinting (WF) defense, where the bounds depend on a chosen feature set.

Cryptography and Security
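
One common route to feature-dependent security bounds of this kind is a nearest-neighbor estimate of the Bayes error in the chosen feature space, using the Cover-Hart inequality. The sketch below illustrates that idea on made-up features; it is not necessarily the paper's exact estimator or bound.

```python
# Lower-bound an attacker's minimal error (Bayes error) on a feature set
# via the leave-one-out 1-NN error and the Cover-Hart relaxation
# R_NN <= 2 R*, hence R* >= R_NN / 2. Illustrative sketch only.
import numpy as np

def loo_nn_error(X, y):
    """Leave-one-out 1-nearest-neighbor error rate."""
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        errors += y[np.argmin(d)] != y[i]
    return errors / len(X)

rng = np.random.default_rng(0)
# Toy "website fingerprinting" features: two classes with partial overlap.
X = np.vstack([rng.normal(0, 1, size=(200, 4)), rng.normal(1, 1, size=(200, 4))])
y = np.array([0] * 200 + [1] * 200)

r_nn = loo_nn_error(X, y)
bound = r_nn / 2   # no classifier on these features attains a lower error
print(f"1-NN LOO error: {r_nn:.3f}, Bayes-error lower bound: {bound:.3f}")
```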
