1 code implementation • 21 Nov 2022 • Bjarne Pfitzner, Bert Arnrich
Federated learning (FL) is receiving increasing attention for processing sensitive, distributed datasets, which are common in domains such as healthcare.
2 code implementations • 6 May 2022 • Joceline Ziegler, Bjarne Pfitzner, Heinrich Schulz, Axel Saalbach, Bert Arnrich
We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients.
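The attack described above reconstructs training inputs from the gradients a client shares. As a toy illustration of why gradients leak inputs, the sketch below uses the well-known analytic trick for a fully connected layer: the gradient with respect to the weights is the upstream gradient scaled by the input, so dividing the weight gradient by the bias gradient recovers the input exactly. This is a minimal single-neuron setup for illustration, not the paper's actual attack or models.

```python
def layer_gradients(W, b, x, target):
    # Single linear neuron with squared loss (illustrative assumption):
    # out = sum(w_i * x_i) + b,  loss = (out - target)^2
    out = sum(wi * xi for wi, xi in zip(W, x)) + b
    d_out = 2 * (out - target)          # upstream gradient dL/d_out
    grad_W = [d_out * xi for xi in x]   # dL/dw_i = dL/d_out * x_i
    grad_b = d_out                      # dL/db   = dL/d_out
    return grad_W, grad_b

def reconstruct_input(grad_W, grad_b):
    # Because dL/dw_i = (dL/db) * x_i, the input follows analytically
    # as grad_W / grad_b -- no optimization needed for this layer type.
    return [g / grad_b for g in grad_W]

x = [0.2, -1.5, 3.0]
gW, gb = layer_gradients([0.1, 0.4, -0.3], 0.05, x, 1.0)
recovered = reconstruct_input(gW, gb)
```

Deep-network attacks instead optimize a dummy input so that its gradients match the observed update, but the leakage principle is the same.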
1 code implementation • 1 Nov 2021 • Jossekin Beilharz, Bjarne Pfitzner, Robert Schmid, Paul Geppert, Bert Arnrich, Andreas Polze
Federated learning allows a group of distributed clients to train a common machine learning model on private data.
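The training scheme described in the sentence above can be sketched as federated averaging (FedAvg): each client runs gradient descent on its private data, and the server averages the resulting weights. The following is a minimal sketch with a hypothetical 1-D linear model, not the paper's setup.

```python
def local_update(w, data, lr=0.1, epochs=1):
    # One client's local training: gradient descent on private data
    # for a 1-D linear model y = w * x (illustrative assumption).
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def federated_averaging(global_w, client_datasets, rounds=5):
    # Each round: clients train locally, the server averages
    # the returned weights into the new global model.
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(local_ws) / len(local_ws)
    return global_w

# Three clients whose private data all follow y = 3 * x
clients = [[(x, 3 * x) for x in (1.0, 2.0)] for _ in range(3)]
w = federated_averaging(0.0, clients)
```

The raw data never leaves the clients; only model weights are exchanged, which is the privacy argument for FL (and, per the entry above, the surface that reconstruction attacks target).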
1 code implementation • 18 Jun 2019 • Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
In this paper, we introduce a novel generative model for crafting systematic poisoning attacks against machine learning classifiers. The model generates adversarial training examples, i.e. samples that look like genuine data points but degrade the classifier's accuracy when used for training.
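To make the poisoning idea concrete, here is a deliberately simple sketch: points are placed near one class's cluster but labelled as the other class, so they resemble genuine data yet drag the learned decision boundary and lower test accuracy. This uses a hypothetical 1-D nearest-centroid classifier for illustration; it is not the paper's generative attack.

```python
def nearest_centroid_fit(points, labels):
    # Fit by averaging the points of each class into a centroid.
    sums, counts = {}, {}
    for p, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + p
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, p):
    # Assign the class whose centroid is closest.
    return min(centroids, key=lambda y: abs(p - centroids[y]))

def accuracy(centroids, points, labels):
    hits = sum(predict(centroids, p) == y for p, y in zip(points, labels))
    return hits / len(points)

# Clean 1-D data: class 0 around 0.0, class 1 around 2.0
clean_pts = [0.0, 0.2, -0.1, 1.9, 2.1, 2.0]
clean_lbl = [0, 0, 0, 1, 1, 1]

# Poison points: drawn near the class-1 cluster but labelled 0 --
# they look like genuine samples yet pull centroid 0 upward.
poison_pts = [1.8, 2.2]
poison_lbl = [0, 0]

clean_model = nearest_centroid_fit(clean_pts, clean_lbl)
poisoned_model = nearest_centroid_fit(clean_pts + poison_pts,
                                      clean_lbl + poison_lbl)

# Held-out test points, including one near the decision boundary.
test_pts = [0.1, 1.2, 2.0]
test_lbl = [0, 1, 1]
acc_clean = accuracy(clean_model, test_pts, test_lbl)
acc_poisoned = accuracy(poisoned_model, test_pts, test_lbl)
```

The paper's contribution is generating such clean-looking poisons systematically with a generative model rather than placing them by hand as done here.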