no code implementations • 14 Apr 2023 • Shreya Wadehra, Roula Nassif, Stefan Vlaski
Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents.
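The consensus mechanism mentioned above can be illustrated with a minimal sketch of decentralized gradient descent: each agent takes a local gradient step and then averages with its neighbors through a doubly stochastic combination matrix, driving all agents toward a common model. The quadratic costs, ring topology, and weights below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical setup: 4 agents, each with a local quadratic cost
# f_k(w) = 0.5 * ||A_k w - b_k||^2, so grad f_k(w) = A_k^T (A_k w - b_k).
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
A = [rng.standard_normal((5, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(5) for _ in range(n_agents)]

# Doubly stochastic combination matrix over a ring topology
# (uniform neighbor weights for brevity; Metropolis weights are common).
C = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

W = np.zeros((n_agents, dim))   # one iterate per agent (rows)
mu = 0.01                        # step size

for _ in range(3000):
    # Adapt: each agent takes a local gradient step
    G = np.stack([A[k].T @ (A[k] @ W[k] - b[k]) for k in range(n_agents)])
    psi = W - mu * G
    # Combine: consensus averaging enforces homogeneity across agents
    W = C @ psi

# With a small constant step size, the agents' iterates cluster tightly
# around the minimizer of the aggregate cost.
spread = np.max(np.abs(W - W.mean(axis=0)))
```

The "adapt then combine" structure is the point: without the `C @ psi` step, each agent would drift to the minimizer of its own local cost rather than a shared model.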
no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed
In this paper, we consider decentralized optimization problems in which agents minimize individual cost functions subject to subspace constraints, which require the minimizers across the network to lie in low-dimensional subspaces.
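A subspace constraint of this kind can be sketched (for a single agent, as an assumed simplification of the decentralized setting) with projected gradient descent: each gradient step is followed by projection onto the low-dimensional subspace, so the iterate never leaves it. The quadratic cost and random subspace below are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: minimize f(w) = 0.5 * ||A w - b||^2 subject to
# w lying in the low-dimensional subspace spanned by the columns of U.
rng = np.random.default_rng(1)
dim, sub_dim = 6, 2
A = rng.standard_normal((10, dim))
b = rng.standard_normal(10)

# Orthonormal basis U for the subspace, and the projector P = U U^T.
U, _ = np.linalg.qr(rng.standard_normal((dim, sub_dim)))
P = U @ U.T

w = np.zeros(dim)
mu = 0.01
for _ in range(3000):
    grad = A.T @ (A @ w - b)
    w = P @ (w - mu * grad)   # gradient step, then project onto span(U)

# On exit, w satisfies the constraint (P w == w) and approximates the
# minimizer of f restricted to the subspace.
```

Because `P` is an orthogonal projector, the projection step never increases the distance to the constrained minimizer, which is what makes the scheme converge for a sufficiently small step size.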
no code implementations • 18 Mar 2022 • Roula Nassif, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
Observations collected by agents in a network may be unreliable due to observation noise or interference.
no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed
Multitask learning is an approach to inductive transfer learning, in which what is learned for one problem is used to assist with another. By exploiting the domain information contained in the training signals of related tasks as an inductive bias, it improves generalization performance relative to learning each task separately.
no code implementations • 30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed
This work introduces two strategies for training network classifiers with heterogeneous agents.