Search Results for author: Dimitar I. Dimitrov

Found 9 papers, 7 papers with code

SPEAR: Exact Gradient Inversion of Batches in Federated Learning

no code implementations • 6 Mar 2024 • Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev

In this work, we propose the first algorithm reconstructing whole batches with $b > 1$ exactly.

Federated Learning
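
A minimal sketch of the standard background fact the paper builds on (not the SPEAR algorithm itself): for batch size $b = 1$, a linear layer's input is exactly recoverable from its gradients, since dL/dW is the rank-1 outer product g x^T and dL/db equals g. SPEAR tackles the much harder case $b > 1$, where dL/dW is a sum of b such rank-1 terms.

    import torch

    torch.manual_seed(0)
    layer = torch.nn.Linear(8, 4)
    x = torch.randn(1, 8)                  # a single client input (b = 1)
    layer(x).pow(2).sum().backward()       # any differentiable loss works

    gW, gb = layer.weight.grad, layer.bias.grad   # dL/dW = g x^T, dL/db = g
    i = gb.abs().argmax()                  # pick any index with gb[i] != 0
    x_rec = gW[i] / gb[i]                  # row i of g x^T, divided by g[i]
    assert torch.allclose(x_rec, x[0], atol=1e-5)  # exact reconstruction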

Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning

1 code implementation • 5 Jun 2023 • Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev

Malicious server (MS) attacks have enabled the scaling of data stealing in federated learning to large batch sizes and secure aggregation, settings previously considered private.

Federated Learning

FARE: Provably Fair Representation Learning with Practical Certificates

1 code implementation • 13 Oct 2022 • Nikola Jovanović, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev

To produce a practical certificate, we develop and apply a statistical procedure that computes a finite sample high-confidence upper bound on the unfairness of any downstream classifier trained on FARE embeddings.

Fairness • Representation Learning
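
At its core, a "finite sample high-confidence upper bound" of this kind is a binomial tail bound. A minimal illustration (the standard Clopper-Pearson construction, not the exact procedure from the paper): given k observed violations in n held-out samples, upper-bound the true violation rate so the bound holds with probability at least 1 - alpha.

    from scipy.stats import beta

    def upper_confidence_bound(k: int, n: int, alpha: float = 0.05) -> float:
        """One-sided Clopper-Pearson upper bound on a binomial proportion."""
        if k >= n:
            return 1.0
        return float(beta.ppf(1.0 - alpha, k + 1, n - k))

    # e.g. 12 observed violations in 1000 samples certify a rate <= ~0.019
    print(upper_confidence_bound(12, 1000))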

TabLeak: Tabular Data Leakage in Federated Learning

1 code implementation • 4 Oct 2022 • Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev

A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction since, unlike for image and text data, direct human inspection is not possible.

Federated Learning • Reconstruction Attack • +1
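
A minimal sketch of challenge (i), using the standard softmax-relaxation idea rather than TabLeak's full attack: categorical columns are relaxed to logits optimized by gradient descent, then projected back to hard categories. The real objective would be gradient matching; a fixed hypothetical target row stands in for it here.

    import torch

    # Hypothetical true row: a one-hot category (4 classes) plus 2 floats.
    target = torch.tensor([0., 0., 1., 0., 0.7, -1.2])
    logits = torch.zeros(4, requires_grad=True)   # relaxed categorical column
    cont = torch.zeros(2, requires_grad=True)     # continuous columns

    opt = torch.optim.Adam([logits, cont], lr=0.1)
    for _ in range(300):
        opt.zero_grad()
        row = torch.cat([torch.softmax(logits, 0), cont])  # differentiable row
        loss = (row - target).pow(2).sum()   # stand-in for gradient matching
        loss.backward()
        opt.step()

    print(logits.argmax().item(), cont.detach())  # 2, approx (0.7, -1.2)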

Data Leakage in Federated Averaging

1 code implementation • 24 Jun 2022 • Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev

On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed on 10 local epochs of 10 batches each with 5 images, compared to only <10% using the baseline.

Federated Learning

LAMP: Extracting Text from Gradients with Language Model Priors

2 code implementations • 17 Feb 2022 • Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev

Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning.

Federated Learning • Language Modelling
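
A minimal sketch of the gradient-matching attack family this line of work builds on (in the spirit of "Deep Leakage from Gradients"; this is not LAMP itself, and the label is assumed known for simplicity): optimize a dummy input so that its gradients match the observed client update.

    import torch

    torch.manual_seed(0)
    model = torch.nn.Linear(16, 4)
    loss_fn = torch.nn.CrossEntropyLoss()
    x_true, y = torch.randn(1, 16), torch.tensor([2])

    # The gradient update observed by the attacker.
    g_true = torch.autograd.grad(loss_fn(model(x_true), y), model.parameters())

    x_dummy = torch.randn(1, 16, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        g = torch.autograd.grad(loss_fn(model(x_dummy), y),
                                model.parameters(), create_graph=True)
        sum((a - b).pow(2).sum() for a, b in zip(g, g_true)).backward()
        opt.step()

    print((x_dummy - x_true).abs().max())   # small: reconstruction close to x_true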

Bayesian Framework for Gradient Leakage

2 code implementations • ICLR 2022 • Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev

We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.

Federated Learning
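
Schematically (notation mine, not verbatim from the paper), the optimal adversary picks the input most probable given the observed gradient $g$; taking negative logs recovers the familiar attack template of a gradient-matching term plus a prior-induced regularizer:

    x^{*} \;=\; \arg\max_{x}\, p(x \mid g)
          \;=\; \arg\max_{x}\, p(g \mid x)\, p(x)
          \;=\; \arg\min_{x}\, \bigl[\, -\log p(g \mid x) - \log p(x) \,\bigr]

Existing attacks then correspond to particular choices of the likelihood $p(g \mid x)$ and the prior $p(x)$.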

Shared Certificates for Neural Network Verification

1 code implementation • 1 Sep 2021 • Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

We perform an extensive experimental evaluation to demonstrate the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications on image classifiers, including the popular patch and geometric perturbations.
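
A minimal sketch of the proof-sharing idea, simplified to interval bounds (the paper works with richer abstract domains): if a new input region is contained in a region whose intermediate bounds were already certified, the cached bounds remain sound and can be reused instead of re-running the analysis.

    import torch

    def interval_affine(lo, hi, W, b):
        # Sound interval propagation through an affine layer.
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid_out = mid @ W.T + b
        rad_out = rad @ W.T.abs()
        return mid_out - rad_out, mid_out + rad_out

    cache = []   # (input region, certified output bounds) pairs

    def propagate(lo, hi, W, b):
        for (clo, chi), bounds in cache:
            if (clo <= lo).all() and (hi <= chi).all():
                return bounds   # contained in a cached region: share the proof
        bounds = interval_affine(lo, hi, W, b)
        cache.append(((lo, hi), bounds))
        return bounds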

Provably Robust Adversarial Examples

no code implementations • ICLR 2022 • Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev

We introduce the concept of provably robust adversarial examples for deep neural networks - connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).
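
A minimal sketch of the definition above for a one-layer classifier, with interval bounds as a conservative stand-in for the paper's analysis: a box around an adversarial example x_adv is provably adversarial if some wrong class provably outscores the true class everywhere in the box.

    import torch

    def box_is_provably_adversarial(W, b, x_adv, eps, true_label):
        rad = torch.full_like(x_adv, eps)   # box [x_adv - eps, x_adv + eps]
        mid_out = x_adv @ W.T + b           # interval pass, one linear layer
        rad_out = rad @ W.T.abs()
        lo, hi = mid_out - rad_out, mid_out + rad_out
        # Sufficient (conservative) check: a wrong class's lower bound beats
        # the true class's upper bound, so every box point is misclassified.
        return any(lo[c] > hi[true_label]
                   for c in range(lo.numel()) if c != true_label)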
