Search Results for author: Roula Nassif

Found 5 papers, 0 papers with code

Exact Subspace Diffusion for Decentralized Multitask Learning

no code implementations • 14 Apr 2023 • Shreya Wadehra, Roula Nassif, Stefan Vlaski

Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents.
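To make the consensus mechanism mentioned above concrete, here is a minimal sketch of consensus-based decentralized gradient descent on toy quadratic costs. It is not the paper's exact subspace diffusion algorithm; the ring topology, combination weights, synthetic data, and step size are all illustrative assumptions.

    # Sketch of consensus-based decentralized gradient descent (illustrative,
    # not the paper's algorithm): agents average with their neighbors and then
    # take a local gradient step on their own quadratic cost.
    import numpy as np

    rng = np.random.default_rng(0)
    num_agents, dim, mu = 5, 3, 0.02

    # Local quadratic costs J_i(w) = 0.5 * ||A_i w - b_i||^2 on synthetic data.
    A = rng.standard_normal((num_agents, 10, dim))
    b = rng.standard_normal((num_agents, 10))

    # Doubly stochastic combination matrix for a ring network (uniform weights).
    C = np.zeros((num_agents, num_agents))
    for i in range(num_agents):
        C[i, i] = C[i, (i - 1) % num_agents] = C[i, (i + 1) % num_agents] = 1 / 3

    W = np.zeros((num_agents, dim))            # one local iterate per agent
    for _ in range(200):
        W = C @ W                              # consensus (combine) step
        residual = np.einsum('nmd,nd->nm', A, W) - b
        grads = np.einsum('nmd,nm->nd', A, residual)
        W = W - mu * grads                     # local gradient (adapt) step

    print("disagreement across agents:", np.linalg.norm(W - W.mean(axis=0)))

The consensus step drives all agents toward a common iterate, which is exactly the homogeneity the abstract refers to; multitask and subspace-constrained formulations relax this requirement.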

Quantization for decentralized learning under subspace constraints

no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed

In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.

Quantization
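As a rough illustration of the problem setup described above, the sketch below runs projected gradient descent on a toy stacked quadratic cost, constraining the network-wide vector to a low-dimensional subspace Range(U) and passing the update through a crude uniform quantizer. This is not the algorithm proposed in the paper; the basis U, the costs, the step size, and the quantizer resolution are assumptions made purely for the example.

    # Sketch of subspace-constrained optimization with quantized updates
    # (illustrative assumptions throughout, not the paper's method).
    import numpy as np

    rng = np.random.default_rng(1)
    num_agents, dim, rank, mu = 4, 2, 3, 0.05
    N = num_agents * dim                       # size of the stacked network vector

    # Orthonormal basis of the (toy) constraint subspace and its projector.
    U, _ = np.linalg.qr(rng.standard_normal((N, rank)))
    P = U @ U.T

    # Stacked quadratic cost 0.5 * (w - d)^T H (w - d) with diagonal H.
    H = np.diag(rng.uniform(1.0, 3.0, size=N))
    d = rng.standard_normal(N)

    def quantize(x, step=0.01):
        """Crude uniform quantizer standing in for finite-rate communication."""
        return step * np.round(x / step)

    w = np.zeros(N)
    for _ in range(300):
        grad = H @ (w - d)
        w = P @ (w - mu * quantize(grad))      # quantized step, then projection

    print("distance from the subspace:", np.linalg.norm(w - P @ w))

Consensus is the special case in which the subspace forces all per-agent blocks of the stacked vector to be identical.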

Decentralized learning in the presence of low-rank noise

no code implementations • 18 Mar 2022 • Roula Nassif, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

Observations collected by agents in a network may be unreliable due to observation noise or interference.

Multitask learning over graphs: An Approach for Distributed, Streaming Machine Learning

no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed

Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist in another). It improves generalization performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias.

BIG-bench Machine Learning, Inductive Bias
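As a toy illustration of multitask learning over a graph, the sketch below lets each agent fit its own least-squares task while a graph-Laplacian penalty nudges neighboring tasks toward each other. It is not one of the specific strategies surveyed in the paper; the chain topology, synthetic data, step size, and regularization weight are illustrative assumptions.

    # Sketch of graph-regularized multitask learning (illustrative only):
    # local least-squares fits coupled by a Laplacian smoothness penalty.
    import numpy as np

    rng = np.random.default_rng(2)
    num_agents, dim, mu, eta = 4, 3, 0.01, 0.5

    # Each agent has its own least-squares task on synthetic data.
    A = rng.standard_normal((num_agents, 20, dim))
    b = rng.standard_normal((num_agents, 20))

    # Chain-graph adjacency and Laplacian coupling neighboring tasks.
    Adj = np.zeros((num_agents, num_agents))
    for i in range(num_agents - 1):
        Adj[i, i + 1] = Adj[i + 1, i] = 1.0
    L = np.diag(Adj.sum(axis=1)) - Adj

    W = np.zeros((num_agents, dim))            # one task vector per agent
    for _ in range(300):
        residual = np.einsum('nmd,nd->nm', A, W) - b
        grads = np.einsum('nmd,nm->nd', A, residual)
        W = W - mu * (grads + eta * L @ W)     # local fit + graph smoothness

    print("gaps between neighboring tasks:", np.linalg.norm(W[:-1] - W[1:], axis=1))

The regularization weight eta controls how strongly the training signals of related tasks act as an inductive bias: eta = 0 recovers independent learning, while a large eta pushes the network back toward a single consensus model.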

Network Classifiers With Output Smoothing

no code implementations • 30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed

This work introduces two strategies for training network classifiers with heterogeneous agents.
