no code implementations • 8 Feb 2024 • Elsa Rizk, Kun Yuan, Ali H. Sayed
In this work, we examine a network of agents operating asynchronously, aiming to discover an ideal global model that suits individual local datasets.
23 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
3 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
This work focuses on adversarial learning over graphs.
16 Jan 2023 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the privatization of distributed learning and optimization strategies.
26 Oct 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the generation of dependent random numbers in a distributed fashion in order to enable privatized distributed learning by networked agents.
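One way such dependent random numbers can enable privatization, sketched here purely as an illustration (the pairing scheme and function names are assumptions, not the paper's construction): neighboring agents draw a shared random value that one endpoint adds and the other subtracts, so each agent's perturbation is private noise locally, yet the terms are dependent and cancel in the network-wide aggregate.

```python
import random

def pairwise_noise(agents, edges, scale=1.0):
    # For each edge, the two endpoints share one random draw; one adds
    # it and the other subtracts it. Individually each agent's noise
    # looks random, but the noise terms sum to zero over the network.
    noise = {a: 0.0 for a in agents}
    for (i, j) in edges:
        n = random.gauss(0.0, scale)
        noise[i] += n
        noise[j] -= n
    return noise

random.seed(1)
agents = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("a", "c")]
noise = pairwise_noise(agents, edges)
total = sum(noise.values())  # cancels up to floating-point error
```

In an aggregation step, each agent would report its value plus its noise term; the server (or network average) recovers the true sum while no single report reveals the underlying value.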
14 Mar 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning is a semi-distributed algorithm, where a server communicates with multiple dispersed clients to learn a global model.
26 Apr 2021 • Elsa Rizk, Ali H. Sayed
In this work, we develop a private multi-server federated learning scheme, which we call graph federated learning.
14 Dec 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning encapsulates distributed learning strategies that are managed by a central unit.
2 Dec 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy.
26 Oct 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients for their learning updates.
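This two-level sampling — the server sampling agents, and each sampled agent sampling its local data — can be sketched as follows. This is a minimal toy illustration with a scalar quadratic loss per data point; the function names, learning rate, and model are assumptions for exposition, not the paper's algorithm.

```python
import random

def local_sgd_step(w, data, lr=0.1, batch_size=2):
    # Agent-level sampling: the agent draws a mini-batch of its local
    # data and computes a stochastic gradient of the toy loss
    # 0.5 * (w - x)^2 averaged over the batch.
    batch = random.sample(data, min(batch_size, len(data)))
    grad = sum(w - x for x in batch) / len(batch)
    return w - lr * grad

def federated_round(w_global, clients, num_sampled=2):
    # Server-level sampling: only a random subset of agents
    # participates in this round.
    sampled = random.sample(sorted(clients), num_sampled)
    # Each sampled agent updates the model locally; the server then
    # averages the returned models.
    updates = [local_sgd_step(w_global, clients[c]) for c in sampled]
    return sum(updates) / len(updates)

random.seed(0)
clients = {"a": [1.0, 1.2], "b": [0.8, 1.1], "c": [1.3, 0.9]}
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
```

After many rounds, the global model `w` hovers near the mean of the agents' data, with fluctuations induced by both sampling levels.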
4 Apr 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
The utilization of online stochastic algorithms is popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
20 Feb 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed
This work introduces two strategies for training network classifiers with heterogeneous agents.