no code implementations • 16 Jun 2022 • Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson
Our work provides the first theoretical analysis of the effect of message compression on distributed training over vertically partitioned data.
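As a rough illustration of the setting (a minimal sketch, not the paper's method): two parties hold disjoint feature blocks of the same samples, each computes a local embedding, and the message it sends for the joint update is compressed before transmission. The helper name `top_k_compress` and the top-k scheme are assumptions for illustration.

```python
# Sketch: vertically partitioned features, compressed embedding messages.
import numpy as np

def top_k_compress(msg, k):
    """Keep the k largest-magnitude entries of a message; zero the rest."""
    flat = msg.ravel()
    out = np.zeros_like(flat)
    idx = np.argsort(np.abs(flat))[-k:]
    out[idx] = flat[idx]
    return out.reshape(msg.shape)

rng = np.random.default_rng(0)
n, d1, d2 = 8, 5, 3                          # samples; feature widths of two parties
X1, X2 = rng.normal(size=(n, d1)), rng.normal(size=(n, d2))  # vertical split
W1, W2 = rng.normal(size=(d1, 4)), rng.normal(size=(d2, 4))  # local models

# Each party sends a compressed embedding of the same samples upstream.
h1 = top_k_compress(X1 @ W1, k=16)
h2 = top_k_compress(X2 @ W2, k=16)
joint = h1 + h2                              # embeddings are fused for the joint loss
```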
no code implementations • 19 Aug 2021 • Anirban Das, Timothy Castiglia, Shiqiang Wang, Stacy Patterson
Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.
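A small sketch of the data layout described above (illustrative only): each silo owns a column block of the dataset (its vertical shard), and that block is split row-wise across the silo's clients (the horizontal partition).

```python
# Sketch: vertical shards per silo, horizontal slices per client.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 6))                 # full dataset: 12 samples, 6 features

num_silos, clients_per_silo = 2, 3
vertical_shards = np.split(X, num_silos, axis=1)   # one feature block per silo

silo_data = []
for shard in vertical_shards:
    # the silo's hub distributes its shard row-wise across its clients
    client_chunks = np.split(shard, clients_per_silo, axis=0)
    silo_data.append(client_chunks)

print(silo_data[0][1].shape)                 # one client's slice, here (4, 3)
```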
no code implementations • 6 Feb 2021 • Anirban Das, Stacy Patterson
Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients.
no code implementations • ICLR 2021 • Timothy Castiglia, Anirban Das, Stacy Patterson
We propose Multi-Level Local SGD, a distributed stochastic gradient method for learning a smooth, non-convex objective in a multi-level communication network with heterogeneous workers.
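A minimal sketch of the multi-level local-update pattern on a toy least-squares problem (an illustration of the general idea, not the paper's algorithm or analysis): workers take local SGD steps, each hub averages its workers every `tau` steps, and hub models are averaged at the top level every `tau * q` steps; the period names are assumptions.

```python
# Sketch: local SGD with periodic averaging at two network levels.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(64, 3)), rng.normal(size=64)   # shared toy problem

def sgd_step(w, lr=0.05, batch=8):
    i = rng.choice(len(b), size=batch, replace=False)
    grad = A[i].T @ (A[i] @ w - b[i]) / batch
    return w - lr * grad

num_hubs, workers_per_hub, tau, q = 2, 3, 5, 2
models = np.zeros((num_hubs, workers_per_hub, 3))      # one model per worker

for t in range(1, 41):
    models = np.array([[sgd_step(w) for w in hub] for hub in models])
    if t % tau == 0:                                   # hub-level averaging
        models = np.repeat(models.mean(axis=1, keepdims=True),
                           workers_per_hub, axis=1)
    if t % (tau * q) == 0:                             # top-level averaging
        models[:] = models.mean(axis=(0, 1))

print(models[0, 0])                                    # consensus model estimate
```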
1 code implementation • 27 Jul 2020 • Timothy Castiglia, Anirban Das, Stacy Patterson
In our algorithm, sub-networks execute a distributed SGD algorithm, using a hub-and-spoke paradigm, and the hubs periodically average their models with neighboring hubs.
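A small sketch of the hub-to-hub averaging step described above (names and topology are illustrative): each hub mixes its model with its neighbors' models through a doubly stochastic matrix defined by the hub network, here a simple 4-hub ring.

```python
# Sketch: one round of neighbor averaging among hubs on a ring.
import numpy as np

num_hubs, dim = 4, 3
rng = np.random.default_rng(1)
hub_models = rng.normal(size=(num_hubs, dim))          # one model per hub

# Ring topology: each hub keeps 1/2 of its own model and 1/4 from each neighbor.
W = np.zeros((num_hubs, num_hubs))
for i in range(num_hubs):
    W[i, i] = 0.5
    W[i, (i - 1) % num_hubs] = 0.25
    W[i, (i + 1) % num_hubs] = 0.25

hub_models = W @ hub_models                            # mix with neighboring hubs
print(hub_models.mean(axis=0))                         # network average is preserved
```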
no code implementations • 11 Nov 2019 • Anirban Das, Thomas Brunschwiler
Federated Learning enables training of a general model through edge devices without sending raw data to the cloud.
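A minimal FedAvg-style sketch of that idea (illustrative, not the paper's system): devices train on their own data and send only model parameters; the cloud averages them, so raw data never leaves the device.

```python
# Sketch: federated averaging over toy linear-regression devices.
import numpy as np

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Each device holds its own private (X, y) pair.
device_data = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

def local_train(w, X, y, lr=0.05, steps=10):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)        # full-batch gradient step
    return w

for rnd in range(3):                                   # a few federated rounds
    local_models = [local_train(global_w.copy(), X, y) for X, y in device_data]
    global_w = np.mean(local_models, axis=0)           # cloud averages models only
print(global_w)
```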
1 code implementation • 14 Nov 2018 • Anirban Das, Stacy Patterson, Mike P. Wittie
The emerging trend of edge computing has led several cloud providers to release their own platforms for performing computation at the 'edge' of the network.
Networking and Internet Architecture • Distributed, Parallel, and Cluster Computing