no code implementations • 28 Mar 2024 • Muhammad Faraz Ul Abrar, Nicolò Michelusi
Recently, Over-the-Air (OTA) computation has emerged as a promising aggregation paradigm for federated learning (FL), leveraging the waveform superposition property of the wireless channel to realize fast model updates.
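As a rough illustration of the OTA idea, the minimal Python sketch below simulates devices transmitting their model updates simultaneously so the receiver observes their noisy sum; the device count, model size, and noise level are arbitrary assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of OTA aggregation, assuming ideal synchronization and
# perfect channel inversion at each device; the device count, model size,
# and noise level are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
num_devices, model_dim, noise_std = 10, 5, 0.1

local_updates = [rng.normal(size=model_dim) for _ in range(num_devices)]

# Waveform superposition: all devices transmit at once, so the parameter
# server receives the noisy sum of the analog signals in a single slot.
received = np.sum(local_updates, axis=0) + rng.normal(scale=noise_std, size=model_dim)

# Recover an estimate of the average model update from the superposed signal.
ota_estimate = received / num_devices
print(np.linalg.norm(ota_estimate - np.mean(local_updates, axis=0)))
```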
no code implementations • 1 Feb 2024 • Muhammad Faraz Ul Abrar, Nicolò Michelusi
Focusing on a single FL round, we cast the optimal scheduling problem as the minimization of the mean squared error (MSE) of the estimated global gradient at the parameter server (PS), subject to a delay constraint, yielding the optimal device scheduling configuration and the number of quantization bits for the digital devices.
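The toy Python sketch below brute-forces a simplified version of such a per-round problem over scheduling sets and per-device bit allocations under a delay budget; the surrogate MSE, the delay model, and all constants are illustrative assumptions, not the paper's formulation.

```python
import itertools
import numpy as np

# Toy brute-force search: pick a set of scheduled devices and their
# quantization bits to minimize a surrogate MSE of the aggregated gradient
# under a delay budget. All models and constants here are assumed.
num_devices, delay_budget, bit_options = 4, 1.0, (2, 4, 8)
time_per_bit = 0.05  # assumed transmission time per quantization bit

def surrogate_mse(schedule, bits):
    # Fewer scheduled devices -> larger sampling error; fewer bits -> larger
    # quantization error (variance ~ 2^(-2b) for a b-bit uniform quantizer).
    sampling_err = (num_devices - len(schedule)) / num_devices
    quant_err = sum(2.0 ** (-2 * b) for b in bits) / len(schedule)
    return sampling_err + quant_err

best_mse, best_config = np.inf, None
for k in range(1, num_devices + 1):
    for schedule in itertools.combinations(range(num_devices), k):
        for bits in itertools.product(bit_options, repeat=k):
            if sum(time_per_bit * b for b in bits) <= delay_budget:
                mse = surrogate_mse(schedule, bits)
                if mse < best_mse:
                    best_mse, best_config = mse, (schedule, bits)

print(best_mse, best_config)
```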
no code implementations • 22 Mar 2023 • Frank Po-Chen Lin, Seyyedali Hosseinalipour, Nicolò Michelusi, Christopher Brinton
The paper introduces delay-aware hierarchical federated learning (DFL) to improve the efficiency of distributed machine learning (ML) model training by accounting for communication delays between edge and cloud.
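The short Python sketch below caricatures the two-tier setup: edge servers aggregate their devices at every step, while the cloud model reaches the edge one aggregation round late; the cluster sizes, aggregation period, and one-round delay are assumptions for illustration, not the paper's system model.

```python
import numpy as np

# Minimal sketch of two-tier (edge/cloud) federated averaging in which the
# cloud model reaches the edge one aggregation round late, mimicking an
# edge-cloud communication delay. All constants here are assumed.
rng = np.random.default_rng(1)
model_dim, num_clusters, devices_per_cluster = 3, 2, 3
total_steps, cloud_period = 20, 5  # cloud aggregates every `cloud_period` steps

edge_models = [rng.normal(size=model_dim) for _ in range(num_clusters)]
stale_global = np.mean(edge_models, axis=0)  # delayed global model at the edge

for t in range(total_steps):
    # Stand-in for local training: each device perturbs its edge model, and
    # the edge server averages the resulting device models.
    edge_models = [
        np.mean([m - 0.1 * rng.normal(size=model_dim)
                 for _ in range(devices_per_cluster)], axis=0)
        for m in edge_models
    ]
    if (t + 1) % cloud_period == 0:
        # The cloud aggregates the current edge models, but the edges only
        # receive the global model computed in the previous cloud round.
        new_global = np.mean(edge_models, axis=0)
        edge_models = [stale_global.copy() for _ in range(num_clusters)]
        stale_global = new_global

print(stale_global)
```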
1 code implementation • 7 Sep 2021 • Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolò Michelusi
Federated learning has emerged as a popular technique for distributing model training across the network edge.
no code implementations • 23 Jul 2021 • Nicolò Michelusi, Gesualdo Scutari, Chang-Shen Lee
This paper studies distributed algorithms for (strongly convex) composite optimization problems over mesh networks, subject to quantized communications.
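A generic quantized decentralized proximal-gradient sketch (not the paper's algorithm) can illustrate this setting; the ring topology, quantizer range, step size, and l1 weight below are assumed for the example.

```python
import numpy as np

# Sketch of decentralized proximal gradient over a ring (mesh) network with
# uniformly quantized exchanges. This is a generic quantized-gossip scheme,
# not the specific algorithm of the paper; all constants are assumed.
rng = np.random.default_rng(2)
num_agents, dim, step, lam, levels = 5, 2, 0.1, 0.01, 16

# Local smooth terms f_i(x) = 0.5 * ||x - b_i||^2 plus a shared l1 regularizer.
targets = rng.normal(size=(num_agents, dim))
x = np.zeros((num_agents, dim))

def quantize(v, lo=-5.0, hi=5.0):
    # Uniform quantizer with `levels` levels on the (assumed) range [lo, hi].
    scale = (hi - lo) / (levels - 1)
    return lo + np.round((np.clip(v, lo, hi) - lo) / scale) * scale

def soft_threshold(v, t):
    # Proximal operator of t * ||v||_1 (the nonsmooth composite term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(200):
    q = quantize(x)  # neighbors only see quantized copies of the local variables
    mixed = (q + np.roll(q, 1, axis=0) + np.roll(q, -1, axis=0)) / 3  # ring gossip
    x = soft_threshold(mixed - step * (x - targets), step * lam)

print(x)  # agents' iterates cluster around the (regularized) consensus optimum
```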
no code implementations • 21 Aug 2020 • Frank Po-Chen Lin, Christopher G. Brinton, Nicolò Michelusi
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training across edge networks.