Search Results for author: Helen Möllering

Found 3 papers, 0 papers with code

ScionFL: Efficient and Robust Secure Quantized Aggregation

no code implementations · 13 Oct 2022 · Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai

In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.

Federated Learning · Quantization
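As context for the abstract above: secure aggregation typically lets a server learn only the *sum* of client updates, and ScionFL applies this to quantized inputs. The sketch below is a minimal, hypothetical illustration of that general idea (fixed-point quantization plus pairwise additive masking, a standard secure-aggregation building block), not ScionFL's actual protocol; the names `quantize`, `pairwise_masks`, and `secure_sum`, the modulus, and the scale are all illustrative assumptions.

```python
import random

# Illustrative sketch (NOT ScionFL's protocol): clients quantize their model
# updates to integers, then add pairwise random masks that cancel when the
# server sums all masked vectors, so individual updates stay hidden.

MOD = 2**16  # all arithmetic over quantized values is done modulo 2^16


def quantize(update, scale=256):
    """Map float update entries to fixed-point integers (hypothetical scheme)."""
    return [int(round(x * scale)) % MOD for x in update]


def pairwise_masks(n_clients, dim, seed=0):
    """Client i's mask: sum of +r_ij for j > i and -r_ij for j < i (mod MOD).

    In a real protocol each pair would derive r_ij from a shared secret;
    here a seeded RNG stands in for that key agreement.
    """
    rng = random.Random(seed)
    shared = {(i, j): [rng.randrange(MOD) for _ in range(dim)]
              for i in range(n_clients) for j in range(i + 1, n_clients)}
    masks = [[0] * dim for _ in range(n_clients)]
    for (i, j), r in shared.items():
        for k in range(dim):
            masks[i][k] = (masks[i][k] + r[k]) % MOD
            masks[j][k] = (masks[j][k] - r[k]) % MOD
    return masks


def secure_sum(quantized_updates):
    """Server-side aggregation: it only ever sees masked vectors."""
    n, dim = len(quantized_updates), len(quantized_updates[0])
    masks = pairwise_masks(n, dim)
    masked = [[(u[k] + m[k]) % MOD for k in range(dim)]
              for u, m in zip(quantized_updates, masks)]
    return [sum(col) % MOD for col in zip(*masked)]
```

Because every pairwise mask appears once with `+` and once with `-`, the masks cancel in the aggregate and `secure_sum` equals the plain modular sum of the quantized updates. Robustness against malicious clients, which ScionFL also targets, is out of scope for this sketch.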

BAFFLE: Towards Resolving Federated Learning’s Dilemma - Thwarting Backdoor and Inference Attacks

no code implementations · 1 Jan 2021 · Thien Duc Nguyen, Phillip Rieger, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Ahmad-Reza Sadeghi, Thomas Schneider, Shaza Zeitouni

Recently, federated learning (FL) has been subject to both security and privacy attacks, posing a dilemma for the underlying algorithmic designs: on the one hand, FL is vulnerable to backdoor attacks that stealthily manipulate the global model’s output via malicious model updates; on the other hand, FL is vulnerable to inference attacks by a malicious aggregator that infers information about clients’ data from their model updates.

Federated Learning · Image Classification

BaFFLe: Backdoor detection via Feedback-based Federated Learning

no code implementations · 4 Nov 2020 · Sebastien Andreina, Giorgia Azzurra Marson, Helen Möllering, Ghassan Karame

In this paper, we propose Backdoor detection via Feedback-based Federated Learning (BAFFLE), a novel defense to secure FL against backdoor attacks.

Federated Learning · Model Poisoning
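The abstract above describes a feedback-based defense in which clients help vet the global model. The following is a heavily simplified, hypothetical sketch of that general pattern, not BAFFLE's actual algorithm: each client compares the new global model's per-class error on its local validation data against the previous round and votes to reject it when any class degrades beyond a threshold; the server keeps the round only if enough clients approve. The function names `client_vote` and `accept_round` and both thresholds are illustrative assumptions.

```python
# Hedged sketch of feedback-based round acceptance (simplified; NOT the
# paper's exact algorithm). Error rates are floats in [0, 1], one per class.

def client_vote(prev_errors, new_errors, threshold=0.1):
    """Approve the new model unless any per-class error rate worsens
    by more than `threshold` relative to the previous round."""
    return all(new - old <= threshold
               for old, new in zip(prev_errors, new_errors))


def accept_round(votes, quorum=0.5):
    """Server-side decision: keep the new global model only if more than
    a `quorum` fraction of clients voted to approve it."""
    return sum(votes) / len(votes) > quorum
```

The intuition, as described in the abstract, is that a backdoored update tends to degrade behavior that honest clients can observe on their own data, so aggregated client feedback can flag a poisoned round even when no single party inspects individual updates.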
