no code implementations • 22 Aug 2023 • Amirhossein Reisizadeh, Khashayar Gatmiry, Asuman Ozdaglar
In many settings, however, heterogeneous data may be generated in clusters with shared structures, as in applications such as federated learning, where a common latent variable governs the distribution of all the samples generated by a client.
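To make the clustered-generation setting concrete, here is a minimal sketch assuming a mixture-of-linear-regressions model, where a latent cluster variable fixes each client's regression parameter; all names, dimensions, and the cluster-assignment rule are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (assumed setup): per-client data governed by a latent
# cluster variable, as in a mixture of linear regressions.
import numpy as np

rng = np.random.default_rng(0)
d, n_clusters, n_clients, n_samples = 5, 3, 12, 100

# One ground-truth parameter per cluster (the shared latent structure).
cluster_params = rng.normal(size=(n_clusters, d))

def client_data(client_id):
    # Latent variable: the cluster this client belongs to.
    z = client_id % n_clusters
    X = rng.normal(size=(n_samples, d))
    y = X @ cluster_params[z] + 0.1 * rng.normal(size=n_samples)
    return X, y, z

X, y, z = client_data(0)  # all samples of client 0 share cluster z's model
```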
1 code implementation • 2 Mar 2023 • Amirhossein Reisizadeh, Haochuan Li, Subhro Das, Ali Jadbabaie
This is in clear contrast to the well-established assumption in folklore non-convex optimization, a.k.a.
no code implementations • 16 Jun 2022 • Romain Cosson, Ali Jadbabaie, Anuran Makur, Amirhossein Reisizadeh, Devavrat Shah
When $r \ll p$, these complexities are smaller than the known complexities of $\mathcal{O}(p \log(1/\epsilon))$ and $\mathcal{O}(p/\epsilon^2)$ of gradient descent (GD) in the strongly convex and non-convex settings, respectively.
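A back-of-the-envelope comparison makes the gap tangible; only the two GD baselines below come from the excerpt, while the $r$-scaled rate is a hypothetical placeholder for the low-rank complexity the paper derives.

```python
# Rough comparison of iteration/complexity scales. Only the GD baselines
# are quoted from the text; the r-scaled rate is an assumed placeholder.
import math

p, r, eps = 10_000, 10, 1e-3
gd_strongly_convex = p * math.log(1 / eps)       # O(p log(1/eps))
gd_nonconvex = p / eps**2                        # O(p / eps^2)
lowrank_strongly_convex = r * math.log(1 / eps)  # hypothetical O(r log(1/eps))

print(f"GD (strongly convex): {gd_strongly_convex:.0f}")
print(f"GD (non-convex):      {gd_nonconvex:.0f}")
print(f"r-scaled variant:     {lowrank_strongly_convex:.0f}  (assumed rate)")
```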
no code implementations • 6 Jun 2022 • Farzan Farnia, Amirhossein Reisizadeh, Ramtin Pedarsani, Ali Jadbabaie
In this paper, we focus on this problem and propose FedOT, a novel personalized federated learning scheme based on optimal transport, which jointly learns the optimal transport maps for transferring data points to a common distribution and the prediction model under the applied transport map.
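The following is a minimal sketch of the general idea, not the paper's algorithm: each client learns a simple affine map pushing its features toward a common reference distribution, and one shared predictor is fit on the transported data. Using per-client whitening as the "transport map" is an assumption made for illustration.

```python
# Illustrative sketch (assumed simplification): affine per-client transport
# to a common distribution, then a single shared least-squares model.
import numpy as np

rng = np.random.default_rng(1)

def fit_transport(X):
    # Affine map x -> (x - mean) / std, pushing features toward N(0, I).
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-8
    return lambda Z: (Z - mu) / sigma

# Two clients holding shifted/scaled versions of the same underlying data.
w_true = rng.normal(size=3)
clients = []
for shift, scale in [(0.0, 1.0), (5.0, 2.0)]:
    X = shift + scale * rng.normal(size=(200, 3))
    y = ((X - shift) / scale) @ w_true + 0.05 * rng.normal(size=200)
    clients.append((X, y))

# Transport every client's data, then fit one shared model.
transported = [(fit_transport(X)(X), y) for X, y in clients]
Xs = np.vstack([X for X, _ in transported])
ys = np.concatenate([y for _, y in transported])
w_shared, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
```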
no code implementations • 28 Dec 2020 • Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani
Federated Learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local.
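As a generic instance of this paradigm (a FedAvg-style sketch, not this paper's specific scheme), the loop below runs local gradient steps on each client's own data and only ships model parameters to the server; all sizes and the least-squares loss are illustrative.

```python
# Minimal FedAvg-style sketch: raw data never leaves the client;
# only model parameters are communicated and averaged.
import numpy as np

rng = np.random.default_rng(2)
d, n_clients = 4, 8
local_data = [(rng.normal(size=(50, d)), rng.normal(size=50))
              for _ in range(n_clients)]

def local_update(w, X, y, lr=0.01, steps=5):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)  # least-squares gradient
    return w

w_global = np.zeros(d)
for round_ in range(20):
    # Each client updates locally on its own samples.
    updates = [local_update(w_global.copy(), X, y) for X, y in local_data]
    w_global = np.mean(updates, axis=0)  # server averages the models
```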
no code implementations • NeurIPS 2020 • Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie
In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model.
no code implementations • 28 Sep 2019 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
Federated learning is a distributed framework in which a model is trained over a set of devices while keeping the data localized.
1 code implementation • NeurIPS 2019 • Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
We consider a decentralized learning problem, where a set of computing nodes aim at solving a non-convex optimization problem collaboratively.
no code implementations • 6 Feb 2019 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
That is, it parallelizes the communications over a tree topology, leading to efficient bandwidth utilization, and carefully designs a redundant data-set allocation and coding strategy at the nodes to make the proposed gradient aggregation scheme robust to stragglers.
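The sketch below shows only the tree-aggregation part of this idea (the coding/redundancy layer for straggler robustness is omitted); the complete-binary-tree shape and the random gradients are illustrative assumptions.

```python
# Minimal sketch of tree-structured gradient aggregation: each node sends
# one partial sum up a single tree link, so no link carries all messages.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, d = 7, 4
grads = rng.normal(size=(n_nodes, d))  # each node's local gradient

# Complete binary tree: children of node i are 2i+1 and 2i+2.
def aggregate(node):
    total = grads[node].copy()
    for child in (2 * node + 1, 2 * node + 2):
        if child < n_nodes:
            total += aggregate(child)  # child sends its partial sum upward
    return total

full_gradient = aggregate(0)
assert np.allclose(full_gradient, grads.sum(axis=0))
```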
no code implementations • 29 Jun 2018 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
We consider the problem of decentralized consensus optimization, where the sum of $n$ smooth and strongly convex functions is minimized over $n$ distributed agents that form a connected network.
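A plain decentralized gradient descent sketch of this consensus setup is below (without the refinements any particular method adds); the ring topology, quadratic losses $f_i(x) = \tfrac{1}{2}(x - b_i)^2$, and step size are illustrative choices.

```python
# Minimal decentralized gradient descent: agents on a ring mix iterates
# via a doubly stochastic matrix W, then take a local gradient step.
import numpy as np

n, lr, iters = 5, 0.1, 200
b = np.arange(n, dtype=float)  # f_i(x) = 0.5 * (x - b_i)^2
x = np.zeros(n)                # agent i's local iterate x_i

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(iters):
    x = W @ x - lr * (x - b)   # gossip step + local gradient step

# Each x_i lands in a neighborhood of the global minimizer mean(b);
# shrinking lr tightens the neighborhood.
print(x, b.mean())
```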
1 code implementation • 21 Jan 2017 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
Recent results have demonstrated that coding can efficiently exploit computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters.
Distributed, Parallel, and Cluster Computing • Information Theory
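To illustrate the coded-computation idea referenced in the entry above (the generic MDS-coding technique, not this paper's heterogeneous-cluster scheme), the sketch below encodes the row blocks of a matrix so that the master recovers $A x$ from any $k$ of $n$ workers; the sizes and the Vandermonde encoder are illustrative assumptions.

```python
# Minimal sketch of (n, k) MDS-coded matrix-vector multiplication:
# any k of the n worker results suffice, so stragglers can be ignored.
import numpy as np

rng = np.random.default_rng(4)
k, n = 3, 5                      # any k of n worker results suffice
A = rng.normal(size=(k * 2, 4))  # data matrix, split into k row blocks
x = rng.normal(size=4)

blocks = np.split(A, k)          # A1, ..., Ak
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Workers compute coded[i] @ x; suppose workers {0, 2, 4} finish first.
fast = [0, 2, 4]
results = np.array([coded[i] @ x for i in fast])

# Decode by inverting the k x k submatrix of G for the fast workers.
decoded = np.linalg.solve(G[fast], results)  # row j recovers blocks[j] @ x
recovered = np.concatenate(decoded)
assert np.allclose(recovered, A @ x)
```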