Search Results for author: Hossein Yalame

Found 5 papers, 1 paper with code

Comments on "Privacy-Enhanced Federated Learning Against Poisoning Adversaries"

no code implementations • 30 Sep 2024 • Thomas Schneider, Ajith Suresh, Hossein Yalame

Despite an earlier comment article (Schneider et al., IEEE TIFS'23), several subsequent papers continued to reference Liu et al. (IEEE TIFS'21) as a potential solution for private federated learning.

Federated Learning

Attesting Distributional Properties of Training Data for Machine Learning

1 code implementation • 18 Aug 2023 • Vasisht Duddu, Anudeep Das, Nora Khayata, Hossein Yalame, Thomas Schneider, N. Asokan

The success of machine learning (ML) has been accompanied by increased concerns about its trustworthiness.

Diversity

WW-FL: Secure and Private Large-Scale Federated Learning

no code implementations • 20 Feb 2023 • Felix Marx, Thomas Schneider, Ajith Suresh, Tobias Wehrle, Christian Weinert, Hossein Yalame

Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.

Data Poisoning · Federated Learning · +1
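As a reading aid, the sketch below illustrates the plain federated-averaging pattern the abstract refers to: raw training data stays on the client devices and only model weights reach the server. It is a minimal toy example with hypothetical names (local_update, federated_round), not the WW-FL protocol itself.

```python
# Minimal illustrative sketch of federated averaging on a toy linear model.
# Raw (x, y) training pairs never leave the clients; only weights are shared.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """One local pass of per-sample SGD on a client's private data."""
    w = global_weights.copy()
    for x, y in client_data:
        grad = 2.0 * (w @ x - y) * x      # gradient of the squared error
        w -= lr * grad
    return w

def federated_round(global_weights, all_client_data):
    """Each client trains locally; the server averages the resulting weights."""
    local_models = [local_update(global_weights, d) for d in all_client_data]
    return np.mean(local_models, axis=0)  # plain, non-secure aggregation

# Toy usage: three clients, each holding its own private data shard.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

def make_client(n=20):
    xs = rng.normal(size=(n, 2))
    return [(x, float(true_w @ x)) for x in xs]

clients = [make_client() for _ in range(3)]
weights = np.zeros(2)
for _ in range(50):
    weights = federated_round(weights, clients)
print(weights)  # converges towards [1.0, -2.0]
```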

ScionFL: Efficient and Robust Secure Quantized Aggregation

no code implementations • 13 Oct 2022 • Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai

In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.

Federated Learning · Quantization
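To make the notion of quantized inputs concrete, the following sketch shows unbiased stochastic quantization of client updates followed by plain averaging. It only illustrates the quantization side with hypothetical helper names; ScionFL's secure (cryptographic) aggregation and its robustness checks against malicious clients are not reproduced here.

```python
# Illustrative sketch: stochastic (unbiased) quantization of client updates
# before averaging. Only the low-bit payload and its metadata would be sent.
import numpy as np

def stochastic_quantize(v, num_levels=4):
    """Quantize each coordinate to one of num_levels values in [v.min, v.max],
    rounding up or down at random so the result is unbiased in expectation."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / (num_levels - 1) or 1.0   # avoid zero scale for constant v
    pos = (v - lo) / scale
    lower = np.floor(pos)
    round_up = np.random.random(v.shape) < (pos - lower)
    q = (lower + round_up).astype(np.uint8)       # low-bit payload sent to the server
    return q, lo, scale

def dequantize(q, lo, scale):
    return lo + q.astype(np.float64) * scale

def quantized_average(client_updates, num_levels=4):
    """Server-side averaging of the dequantized low-bit client updates."""
    restored = [dequantize(*stochastic_quantize(u, num_levels)) for u in client_updates]
    return np.mean(restored, axis=0)

updates = [np.random.randn(8) for _ in range(5)]
print(quantized_average(updates))   # close to np.mean(updates, axis=0) in expectation
```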

BAFFLE: Towards Resolving Federated Learning's Dilemma - Thwarting Backdoor and Inference Attacks

no code implementations • 1 Jan 2021 • Thien Duc Nguyen, Phillip Rieger, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Ahmad-Reza Sadeghi, Thomas Schneider, Shaza Zeitouni

Recently, federated learning (FL) has been subject to both security and privacy attacks, posing a dilemma for the underlying algorithmic designs: on the one hand, FL is vulnerable to backdoor attacks that stealthily manipulate the global model output using malicious model updates; on the other hand, FL is vulnerable to inference attacks in which a malicious aggregator infers information about clients' data from their model updates.

Federated Learning · Image Classification
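The dilemma described above can be seen in a toy example: a server-side filter that rejects an anomalously scaled, backdoor-style update works precisely because it inspects each client's individual plaintext update, which is the same visibility a malicious aggregator could exploit for inference attacks. The sketch below is purely illustrative, with hypothetical names, and is not the BAFFLE design.

```python
# Toy illustration of the backdoor-vs-inference dilemma: a norm-based filter
# catches a scaled malicious update, but only because the server can see every
# client's individual plaintext update.
import numpy as np

def norm_filtered_average(updates, factor=2.0):
    """Drop updates whose L2 norm exceeds factor times the median norm, then average."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    keep = norms <= factor * np.median(norms)     # requires per-client plaintext updates
    kept_updates = [u for u, k in zip(updates, keep) if k]
    return np.mean(kept_updates, axis=0), keep

rng = np.random.default_rng(1)
honest = [0.1 * rng.standard_normal(16) for _ in range(9)]
malicious = 20.0 * rng.standard_normal(16)        # scaled, model-replacement-style update
aggregate, keep_mask = norm_filtered_average(honest + [malicious])
print(keep_mask)  # the scaled update is filtered out, but every update was inspected
```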
