no code implementations • 3 Mar 2025 • Maria Drencheva, Ivo Petrov, Maximilian Baader, Dimitar I. Dimitrov, Martin Vechev
Federated learning claims to enable collaborative model training among multiple clients while preserving data privacy, as clients transmit gradient updates instead of their actual data.
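As a minimal sketch of this protocol, assuming a linear model with squared loss (all names illustrative): the client computes its update locally, and only the gradient ever leaves the device.

```python
# Minimal sketch of a federated learning client, assuming a linear model
# with squared loss; all names are illustrative.
import numpy as np

def client_update(w, X, y):
    """Compute a local gradient; only this gradient leaves the client."""
    residual = X @ w - y            # predictions minus targets
    return X.T @ residual / len(y)  # gradient of the mean squared error

rng = np.random.default_rng(0)
w = np.zeros(3)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
print(client_update(w, X, y))  # this is transmitted; the raw (X, y) never are
```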
no code implementations • 14 Dec 2024 • Anton Alexandrov, Veselin Raychev, Dimitar I. Dimitrov, Ce Zhang, Martin Vechev, Kristina Toutanova
We present BgGPT-Gemma-2-27B-Instruct and BgGPT-Gemma-2-9B-Instruct: continually pretrained and fine-tuned versions of Google's Gemma-2 models, specifically optimized for Bulgarian language understanding and generation.
1 code implementation • 24 May 2024 • Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data.
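A hypothetical server-side counterpart to the client sketch above (names illustrative): the server averages the clients' gradients and applies one update to the global model.

```python
# Hypothetical server-side step (names illustrative): average the
# clients' gradients and apply one update to the global model.
import numpy as np

def aggregate_and_step(w, client_grads, lr=0.1):
    avg_grad = np.mean(client_grads, axis=0)  # FedSGD-style averaging
    return w - lr * avg_grad                  # update the global model

w = np.zeros(3)
grads = [np.ones(3), 2 * np.ones(3), 3 * np.ones(3)]
print(aggregate_and_step(w, grads))  # -> [-0.2 -0.2 -0.2]
```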
no code implementations • 6 Mar 2024 • Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
In this work, we propose SPEAR, the first algorithm to exactly reconstruct whole input batches with batch size $b > 1$.
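The structural fact such exact attacks build on, shown below as an illustration (not the SPEAR algorithm itself): a linear layer's aggregated weight gradient is a sum of $b$ outer products, so its rank is at most the batch size.

```python
# Illustration of the structure exact attacks exploit (not SPEAR itself):
# a linear layer's aggregated weight gradient is a sum of b outer
# products, hence has rank at most the batch size b.
import numpy as np

rng = np.random.default_rng(0)
b, d_in, d_out = 4, 16, 8
X = rng.normal(size=(b, d_in))    # private batch of inputs
G = rng.normal(size=(b, d_out))   # per-sample output gradients
dW = sum(np.outer(g, x) for g, x in zip(G, X))  # observed dL/dW
print(np.linalg.matrix_rank(dW))  # 4: rank <= b even though dW is 8x16
```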
2 code implementations • 5 Jun 2023 • Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev
Malicious server (MS) attacks have enabled data stealing in federated learning to scale to large batch sizes and to secure aggregation, settings previously considered private.
1 code implementation • 13 Oct 2022 • Nikola Jovanović, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev
To produce a practical certificate, we develop and apply a statistical procedure that computes a finite-sample, high-confidence upper bound on the unfairness of any downstream classifier trained on FARE embeddings.
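A generic finite-sample bound in this spirit (a sketch, not necessarily the paper's exact procedure): Clopper-Pearson intervals on each group's rate of positive predictions yield a high-confidence upper bound on a classifier's demographic parity gap.

```python
# Sketch of a finite-sample bound (not necessarily the paper's exact
# procedure): Clopper-Pearson intervals on each group's rate of positive
# predictions give a high-confidence bound on the demographic parity gap.
from scipy.stats import beta

def cp_bounds(k, n, alpha):
    """Exact binomial (Clopper-Pearson) confidence interval for k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# illustrative counts of positive predictions per protected group
k_a, n_a, k_b, n_b = 430, 1000, 360, 1000
lo_a, hi_a = cp_bounds(k_a, n_a, alpha=0.05)
lo_b, hi_b = cp_bounds(k_b, n_b, alpha=0.05)
# largest gap consistent with both intervals (union bound: >= 90% conf.)
print(f"demographic parity gap <= {max(hi_a - lo_b, hi_b - lo_a):.3f}")
```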
1 code implementation • 4 Oct 2022 • Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev
A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike for image and text data, direct human inspection is not possible.
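One common way to handle the discrete part of such problems (a sketch, not necessarily the paper's formulation) is to optimize continuous logits and represent each categorical feature as a softmax over its categories, so gradient-based optimization applies end to end.

```python
# Sketch of a relaxation for the discrete part (not necessarily the
# paper's formulation): optimize continuous logits and represent each
# categorical feature as a softmax, so gradient descent applies directly.
import torch

logits = torch.zeros(4, requires_grad=True)  # one categorical, 4 values
opt = torch.optim.Adam([logits], lr=0.1)
target = torch.tensor([0.0, 0.0, 1.0, 0.0])  # stand-in for the true value

for _ in range(100):
    soft_onehot = torch.softmax(logits, dim=0)  # relaxed category
    loss = ((soft_onehot - target) ** 2).sum()  # toy matching loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.argmax().item())  # decode by taking the most likely category
```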
1 code implementation • 24 Jun 2022 • Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev
On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed over 10 local epochs of 10 batches with 5 images each, compared to only <10% using the baseline.
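To make the setting concrete, here is an illustrative FedAvg client (a toy linear model standing in for images): the server observes only the final weight delta after all local steps, never any individual gradient.

```python
# Illustrative FedAvg client (a toy linear model in place of images):
# the server sees only the final weight delta, not individual gradients.
import numpy as np

rng = np.random.default_rng(0)
w0 = rng.normal(size=3)
w = w0.copy()
batches = [(rng.normal(size=(5, 3)), rng.normal(size=5))
           for _ in range(10)]           # 10 batches of 5 samples

for _ in range(10):                      # 10 local epochs
    for X, y in batches:
        grad = X.T @ (X @ w - y) / 5     # mean-squared-error gradient
        w -= 0.01 * grad                 # local SGD step

print(w - w0)  # the only quantity transmitted to the server
```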
2 code implementations • 17 Feb 2022 • Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev
Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning.
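A minimal gradient-matching reconstruction in the style of such attacks (a sketch with an assumed toy model, not the paper's method): optimize a dummy input until its gradient matches the one the server observed.

```python
# Gradient-matching sketch (toy model, not the paper's method): optimize
# a dummy input until its gradient matches the one observed by the server.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
loss_fn = torch.nn.CrossEntropyLoss()

x_true, y_true = torch.randn(1, 8), torch.tensor([1])
g_true = torch.autograd.grad(loss_fn(model(x_true), y_true),
                             model.parameters())  # what the server sees

x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.05)
for _ in range(300):
    g_dummy = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                  model.parameters(), create_graph=True)
    match = sum(((gd - gt) ** 2).sum() for gd, gt in zip(g_dummy, g_true))
    opt.zero_grad()
    match.backward()
    opt.step()

print((x_dummy - x_true).abs().max().item())  # shrinks as the data leaks
```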
2 code implementations • ICLR 2022 • Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev
We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.
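In this Bayesian view, reconstruction is maximum a posteriori inference over the input: the adversary solves $\hat{x} = \arg\max_x p(x \mid g) = \arg\max_x p(g \mid x)\, p(x)$ (notation mine), and gradient-matching attacks correspond to particular choices of the likelihood $p(g \mid x)$ and the prior $p(x)$.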
1 code implementation • 1 Sep 2021 • Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev
We perform an extensive experimental evaluation demonstrating the effectiveness of shared certificates in reducing verification cost across a range of datasets and attack specifications for image classifiers, including the popular patch and geometric perturbations.
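A toy illustration of the reuse idea (an assumed simplification): if a new perturbation region is contained in a region that has already been certified, the existing certificate transfers and re-verification can be skipped.

```python
# Toy illustration of certificate reuse (an assumed simplification):
# a perturbation region contained in an already-certified region
# inherits its certificate, so re-verification can be skipped.
import numpy as np

def contained(lo_new, hi_new, lo_cert, hi_cert):
    """Check that the new interval region lies inside the certified one."""
    return bool(np.all(lo_new >= lo_cert) and np.all(hi_new <= hi_cert))

lo_cert, hi_cert = np.full(4, -0.3), np.full(4, 0.3)  # certified once
lo_new, hi_new = np.full(4, -0.1), np.full(4, 0.1)    # new specification
print(contained(lo_new, hi_new, lo_cert, hi_cert))    # True -> reuse proof
```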
no code implementations • ICLR 2022 • Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev
We introduce the concept of provably robust adversarial examples for deep neural networks: connected input regions constructed from standard adversarial examples that are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).
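A sketch of certifying an entire region as adversarial with interval arithmetic (an assumed simplification of the approach): if the adversarial class's worst-case logit still dominates every other class's best case over the box, every point in the region is misclassified.

```python
# Sketch of certifying a whole region as adversarial with interval
# arithmetic (an assumed simplification): if the adversarial logit's
# lower bound beats every other logit's upper bound over the box, every
# point in the region is misclassified.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through the affine map Wx + b."""
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    center = W @ mid + b
    radius = np.abs(W) @ rad
    return center - radius, center + radius

W = np.array([[2.0, -1.0], [-1.0, 1.5]])  # toy two-class linear model
b = np.array([0.0, 0.1])
x_adv = np.array([1.0, 0.2])              # a found adversarial example
lo, hi = interval_affine(x_adv - 0.05, x_adv + 0.05, W, b)
print(lo[0] > hi[1])  # True: the entire box is classified as class 0
```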