Semi-Synchronous Federated Learning for Energy-Efficient Training and Accelerated Convergence in Cross-Silo Settings

4 Feb 2021 · Dimitris Stripelis, Jose Luis Ambite

There are situations where data relevant to machine learning problems are distributed across multiple locations that cannot share the data due to regulatory, competitive, or privacy constraints. Machine learning approaches that require copying all data to a single location are hampered by these restrictions on data sharing. Federated Learning (FL) is a promising approach to learn a joint model over all the available data across silos. In many cases, the sites participating in a federation have different data distributions and computational capabilities. In these heterogeneous environments, existing approaches exhibit poor performance: synchronous FL protocols are communication efficient, but have slow learning convergence and high energy cost; conversely, asynchronous FL protocols have faster convergence with lower energy cost, but at a higher communication cost. In this work, we introduce a novel energy-efficient Semi-Synchronous Federated Learning protocol that mixes local models periodically, achieving minimal idle time and fast convergence. We show through extensive experiments over established benchmark datasets in the computer-vision domain, as well as in real-world biomedical settings, that our approach significantly outperforms previous work in data- and computationally-heterogeneous environments.
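To make the protocol idea concrete, below is a minimal Python sketch of one way a semi-synchronous round could work: every silo trains for the same wall-clock period, so faster silos take more local SGD steps instead of idling, and the server then mixes the local models. The toy linear-regression objective, the `steps_per_period` speeds, and the choice of mixing weights (proportional to the number of examples each silo processed during the period) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of semi-synchronous federated averaging (illustrative only).
# Assumptions (hypothetical): shared linear-regression objective; client speed
# given as SGD steps per synchronization period; mixing weights proportional
# to the number of examples processed during the period.
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)

def make_client(n):
    """Create a silo's local dataset drawn from a common linear model."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (200, 500, 1000)]   # heterogeneous data sizes
steps_per_period = [50, 200, 800]                      # heterogeneous compute speeds

def local_sgd(w, X, y, steps, lr=0.01, batch=32):
    """Run `steps` mini-batch SGD steps; return model and examples processed."""
    w = w.copy()
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= lr * grad
    return w, steps * batch

w_global = np.zeros(d)
for rnd in range(20):                                  # semi-synchronous rounds
    models, weights = [], []
    # All silos train for the same wall-clock period; faster silos simply
    # perform more local steps, so no silo sits idle waiting for stragglers.
    for (X, y), steps in zip(clients, steps_per_period):
        w_local, seen = local_sgd(w_global, X, y, steps)
        models.append(w_local)
        weights.append(seen)
    weights = np.array(weights, dtype=float) / sum(weights)
    # Community model: mixing-weighted average of the local models.
    w_global = sum(wt * m for wt, m in zip(weights, models))

print("distance to w_true:", np.linalg.norm(w_global - w_true))
```

Compared with synchronous FedAvg, the fixed synchronization period keeps communication at one exchange per round while eliminating straggler-induced idle time; weighting by examples processed is one plausible way to account for how much work each silo contributed within the period.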
