Federated Learning
1217 papers with code • 12 benchmarks • 11 datasets
Federated Learning is a machine learning approach that allows multiple devices or entities to collaboratively train a shared model without exchanging their data with each other. Instead of sending data to a central server for training, the model is trained locally on each device, and only the model updates are sent to the central server, where they are aggregated to improve the shared model.
This approach allows for privacy-preserving machine learning, as each device keeps its data locally and only shares the information needed to improve the model.
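The train-locally-then-aggregate loop described above can be sketched with federated averaging (FedAvg), a common aggregation scheme. This is a minimal illustration on a toy linear-regression task, not any particular paper's implementation; the learning rate, round count, and client data are assumptions for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Client-side step: train on local data, which never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side step: aggregate updates weighted by each client's data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy setup: two clients hold private samples of the same underlying relation.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: broadcast, train locally, aggregate
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Only the weight vectors travel between clients and server; the raw `(X, y)` pairs stay on each client, which is the privacy property the description above refers to.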
Libraries
Use these libraries to find Federated Learning models and implementations.
Latest papers
FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation
The Segment Anything Model (SAM) serves as a powerful foundation model for visual segmentation and can be adapted for medical image segmentation.
FedHCDR: Federated Cross-Domain Recommendation with Hypergraph Signal Decoupling
Specifically, to address the data heterogeneity across domains, we introduce an approach called hypergraph signal decoupling (HSD) to decouple the user features into domain-exclusive and domain-shared features.
PPS-QMIX: Periodically Parameter Sharing for Accelerating Convergence of Multi-Agent Reinforcement Learning
Agents share the Q-value network periodically during the training process.
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models
However, recent research has proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posing as benign clients, are present in a group of clients.
Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks
In LF, we randomly flipped the labels of benign data and trained the model on the manipulated data.
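Random label flipping of the kind described in this snippet can be sketched as follows; the flip ratio, class count, and function name are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def flip_labels(y, flip_ratio, num_classes, rng):
    """Label-flipping poisoning: reassign a fraction of labels to a different class."""
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_ratio)
    idx = rng.choice(len(y), size=n_flip, replace=False)
    for i in idx:
        # Pick uniformly among the wrong classes, so every flip changes the label.
        wrong = [c for c in range(num_classes) if c != y_poisoned[i]]
        y_poisoned[i] = rng.choice(wrong)
    return y_poisoned

rng = np.random.default_rng(42)
y = rng.integers(0, 10, size=1000)        # clean labels for 10 classes
y_bad = flip_labels(y, flip_ratio=0.3, num_classes=10, rng=rng)
print((y != y_bad).mean())                # exactly 0.3: every flipped label differs
```

A model trained on `y_bad` in place of `y` sees systematically corrupted supervision, which is what degrades the aggregated global model when such a client participates in training.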
Towards Optimal Customized Architecture for Heterogeneous Federated Learning with Contrastive Cloud-Edge Model Decoupling
To address these issues, we propose a novel federated learning framework called FedCMD, a model-decoupling approach tailored to cloud-edge federated learning that separates deep neural networks into a body, which captures shared representations in the cloud, and a personalized head, which mitigates data heterogeneity.
Analysis of Privacy Leakage in Federated Large Language Models
With the rapid adoption of Federated Learning (FL) as the training and tuning protocol for applications utilizing Large Language Models (LLMs), recent research highlights the need for significant modifications to FL to accommodate the large scale of LLMs.
Global and Local Prompts Cooperation via Optimal Transport for Federated Learning
Specifically, for each client, we learn a global prompt to extract consensus knowledge among clients, and a local prompt to capture client-specific category characteristics.
On the Convergence of Federated Learning Algorithms without Data Similarity
In this paper, we present a novel and unified framework for analyzing the convergence of federated learning algorithms without the need for data similarity conditions.
Uncertainty-Based Extensible Codebook for Discrete Federated Learning in Heterogeneous Data Silos
Federated learning (FL), aimed at leveraging vast distributed datasets, confronts a crucial challenge: the heterogeneity of data across different silos.