# FedMorph: Communication Efficient Federated Learning via Morphing Neural Network

29 Sep 2021

The two fundamental bottlenecks in Federated Learning (FL) are communication and computation on heterogeneous edge networks, which restrict both model capacity and user participation. To address these issues, we present FedMorph, an approach that automatically morphs the global neural network into a sub-network to reduce both communication and local computation overheads. At the beginning of each communication round, FedMorph distills a fresh sub-network from the original one while keeping its 'knowledge' as close as possible to that of the model aggregated from local clients in a federated averaging (FedAvg)-like manner. The network morphing process incorporates constraints, e.g., model size or computational FLOPs, as an extra regularizer in the objective function. To make the objective function solvable, we relax the model with the concept of a soft-mask. We empirically show that FedMorph, without any other tricks, reduces communication and computation overheads and increases generalization accuracy. For example, it provides an $85\times$ reduction in server-to-client communication and an $18\times$ reduction in local device computation on the MNIST dataset with ResNet8 as the training network. Combined with benchmark compression approaches, e.g., TopK sparsification, FedMorph collectively provides an $847\times$ reduction in upload communication.
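The abstract mentions relaxing the architecture with a soft-mask and folding resource constraints into the objective as a regularizer, but does not give the exact formulation. The sketch below illustrates the general soft-mask idea only, under assumed names (`logits`, `budget`, and the hinge-style penalty are illustrative choices, not the paper's definitions): each channel gets a learnable gate in $(0, 1)$, and a penalty pushes the expected number of kept channels toward a size budget.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-channel mask logits for one conv layer (assumed name,
# not from the paper). Sigmoid turns them into soft gates in (0, 1).
logits = np.array([2.0, -1.5, 0.3, -3.0])
mask = sigmoid(logits)

# Gating a layer's output channels: channels whose gate is near 0
# contribute little and can later be pruned away, yielding the smaller
# sub-network that is actually communicated and trained.
features = np.random.randn(8, 4)   # batch of 8 samples, 4 channels
gated = features * mask            # soft-masked activations

# Resource constraint as a regularizer (an assumed hinge penalty):
# penalize the expected number of kept channels above a size budget,
# so optimization is pushed toward a model that fits the constraint.
budget = 1.0
size_penalty = max(0.0, mask.sum() - budget)
```

In an actual training loop a term like `size_penalty` would be added to the distillation loss and the logits updated by gradient descent; here it is computed once just to show the mechanics.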

