no code implementations • 15 Apr 2024 • Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Christopher G. Brinton
Specifically, we introduce a smart information push-pull methodology for data/embedding exchange tailored to FL settings with either soft or strict data privacy restrictions.
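A minimal, hypothetical sketch of what such a privacy-aware push-pull exchange rule could look like: under soft privacy restrictions a device may share raw data points with a neighbor, while under strict restrictions only embeddings leave the device. The class names, encoder, and selection logic below are illustrative assumptions, not the paper's implementation.

```python
from enum import Enum

import numpy as np


class Privacy(Enum):
    SOFT = "soft"      # raw data may leave the device
    STRICT = "strict"  # only embeddings may leave the device


class Device:
    def __init__(self, data: np.ndarray, privacy: Privacy):
        self.data = data          # local dataset, shape (n_samples, n_features)
        self.privacy = privacy
        self.buffer = []          # items received from neighbors

    def embed(self, x: np.ndarray) -> np.ndarray:
        # Placeholder encoder; the paper's embedding model is not specified here.
        w = np.random.default_rng(0).normal(size=(x.shape[1], 8))
        return np.tanh(x @ w)

    def push(self, n_items: int) -> np.ndarray:
        """Share information with a neighbor, respecting the local privacy mode."""
        idx = np.random.default_rng().choice(len(self.data), size=n_items, replace=False)
        if self.privacy is Privacy.SOFT:
            return self.data[idx]          # push raw data points
        return self.embed(self.data[idx])  # push embeddings only

    def pull(self, neighbor: "Device", n_items: int) -> None:
        """Request information from a neighbor and store it locally."""
        self.buffer.append(neighbor.push(n_items))


# Usage: a strict-privacy device pulls from a soft-privacy neighbor,
# receiving raw points because the *sender's* restriction is soft.
a = Device(np.random.rand(100, 4), Privacy.SOFT)
b = Device(np.random.rand(100, 4), Privacy.STRICT)
b.pull(a, n_items=5)
```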
no code implementations • 15 Feb 2024 • Seohyun Lee, Anindya Bijoy Das, Satyavrat Wagle, Christopher G. Brinton
Numerical analysis shows the advantages of the proposed method over several existing FL schemes in terms of convergence speed and straggler resilience on benchmark datasets.
no code implementations • 7 Aug 2023 • Satyavrat Wagle, Anindya Bijoy Das, David J. Love, Christopher G. Brinton
Augmenting federated learning (FL) with direct device-to-device (D2D) communications can help improve convergence speed and reduce model bias through rapid local information exchange.
no code implementations • 4 Aug 2022 • Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Mung Chiang, Christopher G. Brinton
In most of the current literature, FL has been studied for supervised ML tasks, in which edge devices collect labeled data.
no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong
Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to one another, with these decisions determined through a convex data transfer optimization problem that trades off the costs each device incurs for processing, offloading, and discarding data points.
Distributed, Parallel, and Cluster Computing
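A minimal sketch of how such a processing/offloading/discarding trade-off could be posed as a linear program, using the cvxpy modeling library. The variables, cost coefficients, and capacity constraints below are illustrative assumptions, not the formulation from the paper.

```python
# pip install cvxpy numpy
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
N = 4                                      # number of devices
D = rng.integers(50, 200, size=N)          # data points collected at each device

c_proc = rng.uniform(0.5, 2.0, size=N)     # per-point processing cost at each device
c_tx = rng.uniform(0.1, 1.0, size=(N, N))  # per-point offloading cost from device i to j
np.fill_diagonal(c_tx, 0.0)                # processing locally incurs no transfer cost
c_drop = 5.0                               # per-point penalty for discarding data
cap = rng.integers(100, 300, size=N)       # processing capacity of each device

x = cp.Variable((N, N), nonneg=True)  # x[i, j]: points device i sends to device j (j = i means keep local)
d = cp.Variable(N, nonneg=True)       # d[i]: points device i discards

cost = (
    cp.sum(cp.multiply(c_tx, x))                        # offloading (transfer) cost
    + cp.sum(cp.multiply(c_proc, cp.sum(x, axis=0)))    # processing cost where the data ends up
    + c_drop * cp.sum(d)                                # discard penalty
)
constraints = [
    cp.sum(x, axis=1) + d == D,   # every collected point is processed somewhere or discarded
    cp.sum(x, axis=0) <= cap,     # processing capacity at each receiving device
]
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()

print("optimal cost:", prob.value)
print("transfer plan (rounded):\n", np.round(x.value, 1))
```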