no code implementations • 6 Jul 2024 • Ainur Zhaikhan, Ali H. Sayed
This study proposes the use of a social learning method to estimate a global state within a multi-agent off-policy actor-critic algorithm for reinforcement learning (RL) operating in a partially observable environment.
Multi-agent Reinforcement Learning, Reinforcement Learning
no code implementations • 28 Jun 2024 • Ying Cao, Zhaoxian Wu, Kun Yuan, Ali H. Sayed
This paper proposes a theoretical framework to evaluate and compare the performance of gradient-descent algorithms for distributed learning in relation to their behavior around local minima in nonconvex environments.
no code implementations • 26 Jun 2024 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
The results establish that, in the small step-size regime and with a finite number of bits, it is possible to attain the performance achievable in the absence of compression.
no code implementations • 18 Jun 2024 • Haoyuan Cai, Sulaiman A. Alghunaim, Ali H. Sayed
Lower-bound analyses for nonconvex strongly-concave minimax optimization problems have shown that stochastic first-order algorithms require at least $\mathcal{O}(\varepsilon^{-4})$ oracle complexity to find an $\varepsilon$-stationary point.
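For reference, stationarity in the nonconvex strongly-concave setting is commonly measured through the primal function; the following is the standard textbook formulation, stated here as an assumed convention rather than quoted from the paper:

$$\Phi(x) = \max_{y} f(x, y), \qquad x \text{ is } \varepsilon\text{-stationary if } \|\nabla \Phi(x)\| \le \varepsilon.$$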
no code implementations • 2 May 2024 • Mert Kayaalp, Yunus Inan, Visa Koivunen, Ali H. Sayed
To evaluate how each agent influences the overall decision, we adopt a causal framework in order to distinguish the actual influence of agents from mere correlations within the decision-making process.
no code implementations • 8 Feb 2024 • Elsa Rizk, Kun Yuan, Ali H. Sayed
In this work, we examine a network of agents operating asynchronously, aiming to discover an ideal global model that suits individual local datasets.
1 code implementation • 26 Jan 2024 • Haoyuan Cai, Sulaiman A. Alghunaim, Ali H. Sayed
The optimistic gradient method is useful in addressing minimax optimization problems.
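As background, here is a minimal sketch of the classical optimistic gradient update for a smooth minimax problem $\min_x \max_y f(x,y)$; the bilinear test problem, step size, and function names below are illustrative assumptions, not the paper's setup.

```python
def ogda(x, y, grad_f, eta=0.1, steps=1000):
    """Optimistic gradient descent-ascent: extrapolate using the
    previous gradient before taking the current step."""
    gx_prev, gy_prev = grad_f(x, y)
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        # optimistic correction: 2 * current gradient - previous gradient
        x = x - eta * (2 * gx - gx_prev)
        y = y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

# Bilinear saddle f(x, y) = x * y with saddle point (0, 0): plain
# gradient descent-ascent cycles or diverges here, while OGDA converges.
grad = lambda x, y: (y, x)   # (df/dx, df/dy)
print(ogda(1.0, 1.0, grad))  # -> values near (0.0, 0.0)
```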
no code implementations • 13 Jul 2023 • Mert Kayaalp, Ali H. Sayed
This paper investigates causal influences between agents linked by a social graph and interacting over time.
no code implementations • 15 Jun 2023 • Ping Hu, Virginia Bordignon, Mert Kayaalp, Ali H. Sayed
This paper studies the probability of error associated with the social machine learning framework, which involves an independent training phase followed by a cooperative decision-making phase over a graph.
no code implementations • 19 Apr 2023 • Ainur Zhaikhan, Ali H. Sayed
Moreover, the proposed scheme allows agents to communicate in a fully decentralized manner with minimal information exchange.
no code implementations • 7 Apr 2023 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
In this work we derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
no code implementations • 23 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
no code implementations • 10 Mar 2023 • Mert Kayaalp, Yunus Inan, Visa Koivunen, Emre Telatar, Ali H. Sayed
We consider the problem of information aggregation in federated decision making, where a group of agents collaborate to infer the underlying state of nature without sharing their private data with the central processor or each other.
no code implementations • 3 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed
This work focuses on adversarial learning over graphs.
1 code implementation • 8 Feb 2023 • Mert Kayaalp, Fatima Ghadieh, Ali H. Sayed
As a remedy, we propose a fully decentralized belief-forming strategy that relies on individual updates and on localized interactions over a communication network.
no code implementations • 25 Jan 2023 • Michele Cirillo, Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
We devise a novel learning strategy where each agent forms a valid belief by completing the partial beliefs received from its neighbors.
no code implementations • 16 Jan 2023 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the privatization of distributed learning and optimization strategies.
no code implementations • 5 Dec 2022 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
This work studies networked agents cooperating to track a dynamical state of nature under partial information.
no code implementations • 26 Oct 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
We study the generation of dependent random numbers in a distributed fashion in order to enable privatized distributed learning by networked agents.
no code implementations • 25 Oct 2022 • Stefan Vlaski, Soummya Kar, Ali H. Sayed, José M. F. Moura
Moreover, and significantly, theory and applications show that networked agents, through cooperation and sharing, are able to match the performance of cloud or federated solutions, while offering the potential for improved privacy, increased resilience, and resource savings.
no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed
In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.
no code implementations • 28 Apr 2022 • Mert Kayaalp, Yunus Inan, Emre Telatar, Ali H. Sayed
We study the asymptotic learning rates under linear and log-linear combination rules of belief vectors in a distributed hypothesis testing problem.
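For readers unfamiliar with the two rules, their standard forms in the social learning literature are reproduced below, with $\mu_{k,i}(\theta)$ denoting the belief of agent $k$ at time $i$ and $a_{\ell k}$ the combination weights over the neighborhood $\mathcal{N}_k$; this notation is an assumed convention, not quoted from the paper:

$$\text{linear: } \mu_{k,i}(\theta) = \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\, \mu_{\ell,i}(\theta), \qquad \text{log-linear: } \mu_{k,i}(\theta) \propto \prod_{\ell \in \mathcal{N}_k} \big(\mu_{\ell,i}(\theta)\big)^{a_{\ell k}}.$$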
no code implementations • 18 Mar 2022 • Roula Nassif, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
Observations collected by agents in a network may be unreliable due to observation noise or interference.
no code implementations • 14 Mar 2022 • Ping Hu, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
Adaptive social learning is a useful tool for studying distributed decision-making problems over graphs.
no code implementations • 14 Mar 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning is a semi-distributed algorithm, where a server communicates with multiple dispersed clients to learn a global model.
no code implementations • 14 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed
Social learning algorithms provide models for the formation of opinions over social networks resulting from local reasoning and peer-to-peer exchanges.
no code implementations • 11 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed
For a given graph topology, these algorithms allow for the prediction of formed opinions.
no code implementations • 4 Mar 2022 • Mert Kayaalp, Virginia Bordignon, Ali H. Sayed
We show that agents can learn the true hypothesis even if they do not discuss it, at rates comparable to traditional social learning.
no code implementations • 22 Dec 2021 • Jerónimo Arenas-García, Luis A. Azpicueta-Ruiz, Magno T. M. Silva, Vitor H. Nascimento, Ali H. Sayed
Adaptive filters are at the core of many signal processing applications, ranging from acoustic noise suppression and echo cancellation to array beamforming and channel equalization, as well as more recent sensor network applications in surveillance, target localization, and tracking.
1 code implementation • 17 Dec 2021 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
In the proposed social machine learning (SML) strategy, two phases are present: in the training phase, classifiers are independently trained to generate a belief over a set of hypotheses using a finite number of training samples; in the prediction phase, classifiers evaluate streaming unlabeled observations and share their instantaneous beliefs with neighboring classifiers.
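A minimal sketch of the two-phase structure described above: independently trained classifiers produce beliefs (class probabilities) that are geometrically averaged across the graph during prediction. The dataset, logistic classifiers, and uniform weights are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training phase: each agent trains independently on its own finite sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
agents = [LogisticRegression().fit(X[k::3], y[k::3]) for k in range(3)]

# Prediction phase: agents share instantaneous beliefs on a streaming sample.
x_new = rng.normal(size=(1, 2))
beliefs = np.array([a.predict_proba(x_new)[0] for a in agents])
combined = np.exp(np.mean(np.log(beliefs), axis=0))  # log-linear pooling
combined /= combined.sum()                           # renormalize
print(combined)  # pooled belief over the two hypotheses
```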
no code implementations • 3 Dec 2021 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
We propose a diffusion strategy nicknamed ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step that leverages a recently introduced class of stochastic compression operators; and iii) a combination step where each agent combines the compressed updates received from its neighbors.
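A hedged sketch of the three ACTC steps on a distributed least-mean-squares problem; the deterministic quantizer, ring topology, uniform weights, and data model are illustrative stand-ins for the stochastic compression operators and general graphs analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, mu = 8, 5, 0.02
w_true = rng.normal(size=d)
w = np.zeros((K, d))                                   # one iterate per agent
neighbors = {k: [(k - 1) % K, k, (k + 1) % K] for k in range(K)}  # ring graph

def quantize(v, levels=256, vmax=4.0):
    """Toy uniform quantizer standing in for the stochastic
    compression operators used by ACTC."""
    step = 2 * vmax / levels
    return np.clip(np.round(v / step) * step, -vmax, vmax)

for _ in range(2000):
    # i) adapt: individual stochastic-gradient step with constant step-size
    H = rng.normal(size=(K, d))
    noise = 0.1 * rng.normal(size=K)
    err = (H * w).sum(axis=1) - H @ w_true - noise     # per-agent LMS error
    psi = w - mu * H * err[:, None]
    # ii) compress: quantize the updates before transmission
    psi_q = quantize(psi)
    # iii) combine: average compressed updates over each neighborhood
    w = np.array([psi_q[neighbors[k]].mean(axis=0) for k in range(K)])

print(np.linalg.norm(w.mean(axis=0) - w_true))  # small steady-state error
```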
no code implementations • 26 Nov 2021 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
This work proposes a multi-agent filtering algorithm over graphs for finite-state hidden Markov models (HMMs), which can be used for sequential state estimation or for tracking opinion formation over dynamic social networks.
no code implementations • 26 Apr 2021 • Elsa Rizk, Ali H. Sayed
Thus in this work, we develop a private multi-server federated learning scheme, which we call graph federated learning.
no code implementations • 29 Mar 2021 • Stefan Vlaski, Ali H. Sayed
Adaptive networks have the capability to pursue solutions of global stochastic optimization problems by relying only on local interactions within neighborhoods.
no code implementations • 26 Mar 2021 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
We then show that such attacks can succeed by exhibiting strategies that the malicious agents can adopt for this purpose.
no code implementations • 14 Dec 2020 • Stefano Marano, Ali H. Sayed
This work focuses on the development of a new family of decision-making algorithms for adaptation and learning, which are specifically tailored to decision problems and are constructed by building up on first principles from decision theory.
no code implementations • 14 Dec 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning encapsulates distributed learning strategies that are managed by a central unit.
no code implementations • 2 Dec 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy.
no code implementations • 26 Oct 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients for their learning updates.
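A minimal sketch of that two-level sampling structure: the server selects a subset of agents per round, each selected agent computes a stochastic gradient from one sample of its local data, and the server averages the updates. The quadratic local losses and uniform sampling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, d, lr = 20, 3, 0.1
w_star = rng.normal(size=d)
data = []                                  # each agent's private local dataset
for _ in range(n_agents):
    X = rng.normal(size=(50, d))
    data.append((X, X @ w_star + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
for _ in range(500):
    selected = rng.choice(n_agents, size=5, replace=False)  # server samples agents
    grads = []
    for k in selected:
        X, y = data[k]
        i = rng.integers(len(y))           # agent samples its local data
        grads.append(X[i] * (X[i] @ w - y[i]))
    w -= lr * np.mean(grads, axis=0)       # server averages the updates

print(np.linalg.norm(w - w_star))  # converges to a small steady-state error
```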
no code implementations • 26 Oct 2020 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed
A common assumption in the social learning literature is that agents exchange information in an unselfish manner.
no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.
no code implementations • 23 Oct 2020 • Stefan Vlaski, Ali H. Sayed
Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information as a result of repeated local exchanges of intermediate estimates.
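To visualize this diffusion of information, the sketch below propagates a quantity held initially by a single agent through repeated local averaging on a ring; the doubly stochastic weights and topology are illustrative assumptions.

```python
import numpy as np

K = 5
A = np.zeros((K, K))                       # combination matrix (ring, uniform)
for k in range(K):
    for l in [(k - 1) % K, k, (k + 1) % K]:
        A[l, k] = 1 / 3                    # each agent averages its neighborhood

x = np.eye(K)[0]                           # information held by agent 0 only
for t in [1, 2, 5, 20]:
    print(t, np.linalg.matrix_power(A.T, t) @ x)
# repeated local exchanges spread the information toward the uniform 1/K
```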
no code implementations • 6 Oct 2020 • Mert Kayaalp, Stefan Vlaski, Ali H. Sayed
The formalism of meta-learning is actually well-suited to this decentralized setting, where the learner would be able to benefit from information and computational power spread across the agents.
1 code implementation • 24 Jun 2020 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
Instead of sharing the entirety of their beliefs, this work considers the case in which agents share their beliefs regarding only one hypothesis of interest, with the purpose of evaluating its validity, and establishes conditions under which this policy does not affect truth learning.
no code implementations • 15 Jun 2020 • Sulaiman A. Alghunaim, Ming Yan, Ali H. Sayed
This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents.
no code implementations • 5 Jun 2020 • Lucas Cassano, Ali H. Sayed
We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work.
no code implementations • 4 Apr 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed
The utilization of online stochastic algorithms is popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
no code implementations • 31 Mar 2020 • Stefan Vlaski, Ali H. Sayed
Rapid advances in data collection and processing capabilities have allowed for the use of increasingly complex models that give rise to nonconvex optimization problems.
no code implementations • 20 Feb 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed
Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist with another) that improves generalization relative to learning each task separately, by using the domain information contained in the training signals of related tasks as an inductive bias.
no code implementations • 18 Dec 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed
Many optimization, inference and learning tasks can be accomplished efficiently by means of decentralized processing algorithms where the network topology (i.e., the graph) plays a critical role in enabling the interactions among neighboring nodes.
1 code implementation • 30 Oct 2019 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
This work studies the learning abilities of agents sharing partial beliefs over social networks.
Signal Processing, Multiagent Systems
no code implementations • 30 Oct 2019 • Stefan Vlaski, Ali H. Sayed
Under appropriate cooperation protocols and parameter choices, fully decentralized solutions for stochastic optimization have been shown to match the performance of centralized solutions and result in linear speedup (in the number of agents) relative to non-cooperative approaches in the strongly-convex setting.
no code implementations • 30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed
This work introduces two strategies for training network classifiers with heterogeneous agents.
no code implementations • 20 Sep 2019 • Stefan Vlaski, Lieven Vandenberghe, Ali H. Sayed
The purpose of this work is to develop and study a distributed strategy for Pareto optimization of an aggregate cost consisting of regularized risks.
1 code implementation • 13 Sep 2019 • Lucas Cassano, Ali H. Sayed
In this article we explore an alternative approach to deep exploration and introduce the ISL algorithm, which performs deep exploration efficiently.
no code implementations • 19 Aug 2019 • Stefan Vlaski, Ali H. Sayed
Recent years have seen increased interest in performance guarantees of gradient descent algorithms for non-convex optimization.
no code implementations • 3 Jul 2019 • Stefan Vlaski, Ali H. Sayed
In Part I [2] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point.
no code implementations • 5 Apr 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed
This claim is proved for three matrix estimators: i) the Granger estimator, which adapts to the partial observability setting the solution that is exact under full observability; ii) the one-lag correlation matrix; and iii) the residual estimator based on the difference between two consecutive time samples.
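To make the first two estimators concrete, here is a hedged sketch under an assumed first-order VAR model $y_n = A y_{n-1} + x_n$: the Granger estimator combines the one-lag and zero-lag correlation matrices, while the one-lag estimator uses $R_1$ alone. The model and matrix choices are illustrative, and the residual estimator is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
K, N = 6, 100_000
A = 0.05 * rng.random((K, K)) + 0.5 * np.eye(K)   # stable VAR(1) matrix

y = np.zeros((N, K))
for n in range(1, N):                             # simulate y_n = A y_{n-1} + x_n
    y[n] = A @ y[n - 1] + rng.normal(size=K)

R0 = y[1:].T @ y[1:] / (N - 1)                    # zero-lag correlation matrix
R1 = y[1:].T @ y[:-1] / (N - 1)                   # one-lag: E[y_n y_{n-1}^T] = A R0
A_granger = R1 @ np.linalg.inv(R0)                # Granger estimator of A
print(np.abs(A_granger - A).max())                # small estimation error
```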
no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed
It is still unknown whether, when, and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradients and constant step-sizes.
no code implementations • 20 Dec 2018 • Sahar Khawatmi, Abdelhak M. Zoubir, Ali H. Sayed
In important applications involving multi-task networks with multiple objectives, the agents need to decide among these objectives and reach agreement on which single objective the network should follow.
no code implementations • 17 Oct 2018 • Lucas Cassano, Kun Yuan, Ali H. Sayed
In this scenario, agents collaborate to estimate the value function of a target team policy.
no code implementations • 6 Jun 2018 • Hadi Ghauch, Mikael Skoglund, Hossein Shokri-Ghadikolaei, Carlo Fischione, Ali H. Sayed
We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model for a collection of binary random variables.
BIG-bench Machine Learning, Interpretable Machine Learning
no code implementations • 29 May 2018 • Bicheng Ying, Kun Yuan, Ali H. Sayed
This work studies the problem of learning under both large datasets and large-dimensional feature space scenarios.
no code implementations • 21 Mar 2018 • Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed
In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly.
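A minimal illustration of the two sampling schemes being compared, on a least-squares problem with constant step-size; the objective and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, mu, epochs = 100, 5, 0.01, 200
X = rng.normal(size=(N, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.01 * rng.normal(size=N)

def run(sampler):
    w = np.zeros(d)
    for _ in range(epochs):
        for i in sampler():
            w -= mu * X[i] * (X[i] @ w - y[i])   # stochastic gradient step
    return np.linalg.norm(w - w_star)

uniform = lambda: rng.integers(N, size=N)    # sample uniformly with replacement
reshuffle = lambda: rng.permutation(N)       # visit each sample once per epoch
print(run(uniform), run(reshuffle))          # reshuffling typically ends closer
```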
no code implementations • 23 Dec 2017 • Sulaiman A. Alghunaim, Ali H. Sayed
In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally.
Optimization and Control
no code implementations • 4 Aug 2017 • Bicheng Ying, Kun Yuan, Ali H. Sayed
First, it resolves this open issue and provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA; the argument is also adaptable to other variance-reduced algorithms.
no code implementations • 4 Aug 2017 • Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed
For such situations, the balanced gradient computation property of AVRG becomes a real advantage in reducing idle time caused by unbalanced local data storage requirements, which is characteristic of other reduced-variance gradient algorithms.
no code implementations • 20 Apr 2017 • Bicheng Ying, Ali H. Sayed
The analysis in Part I revealed interesting properties for subgradient learning algorithms in the context of stochastic optimization when gradient noise is present.
no code implementations • 13 Feb 2017 • Jie Chen, Cédric Richard, Ali H. Sayed
Online learning with streaming data in a distributed and collaborative manner can be useful in a wide range of applications.
no code implementations • 28 Oct 2016 • Sahar Khawatmi, Ali H. Sayed, Abdelhak M. Zoubir
We consider the problem of decentralized clustering and estimation over multi-task networks, where agents infer and track different models of interest.
no code implementations • 14 Mar 2016 • Kun Yuan, Bicheng Ying, Ali H. Sayed
The article examines in some detail the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime.
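For reference, a hedged sketch of the heavy-ball momentum recursion with constant step-size on streaming least-squares data; the data model and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d, mu, beta = 4, 0.01, 0.9                 # constant step-size and momentum
w_star = rng.normal(size=d)
w, v = np.zeros(d), np.zeros(d)

for _ in range(20_000):
    h = rng.normal(size=d)                              # streaming regressor
    g = h * (h @ w - h @ w_star - 0.05 * rng.normal())  # stochastic gradient
    v = beta * v + g                                    # momentum recursion
    w = w - mu * v

print(np.linalg.norm(w - w_star))  # small steady-state mean-square error
```

In the slow-adaptation regime such a recursion behaves much like plain stochastic gradient descent with the rescaled step-size $\mu/(1-\beta)$.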
no code implementations • 24 Feb 2016 • Bicheng Ying, Kun Yuan, Ali H. Sayed
The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees.
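A minimal sketch of the dual coordinate-ascent idea on ridge regression, where each randomly selected dual coordinate admits a closed-form update; the problem instance and regularization are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

alpha = np.zeros(n)
w = np.zeros(d)                      # maintained as w = X.T @ alpha / (lam * n)
for _ in range(30 * n):
    i = rng.integers(n)              # pick one dual coordinate at random
    # closed-form ascent step for the squared loss
    delta = (y[i] - X[i] @ w - alpha[i]) / (1 + X[i] @ X[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)

# compare against the exact ridge solution of (X'X/n + lam I) w = X'y/n
w_exact = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(np.linalg.norm(w - w_exact))   # small residual difference
```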
no code implementations • 30 Nov 2015 • Ali H. Sayed, Xiaochuan Zhao
In a recent article [1] we surveyed advances related to adaptation, learning, and optimization over synchronous networks.
no code implementations • 24 Nov 2015 • Bicheng Ying, Ali H. Sayed
In this work and the supporting Part II, we examine the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature.
no code implementations • 4 Dec 2014 • Bicheng Ying, Ali H. Sayed
The paper examines the learning mechanism of adaptive agents over weakly-connected graphs and reveals an interesting behavior on how information flows through such topologies.
no code implementations • 22 Sep 2014 • Xiaochuan Zhao, Ali H. Sayed
The resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks.
no code implementations • 16 Aug 2014 • Zaid J. Towfic, Ali H. Sayed
It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies.
no code implementations • 6 Feb 2014 • Jianshu Chen, Zaid J. Towfic, Ali H. Sayed
In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements.
no code implementations • 30 Dec 2013 • Sergio Valcarcel Macua, Jianshu Chen, Santiago Zazo, Ali H. Sayed
We apply diffusion strategies to develop a fully-distributed cooperative reinforcement learning algorithm in which agents in a network communicate only with their immediate neighbors to improve predictions about their environment.
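A hedged sketch of the diffusion idea in a policy-evaluation setting: each agent runs a local TD(0) update on its own stream of transitions and then combines its parameters with immediate neighbors only. The Markov chain, tabular values, ring topology, and uniform weights are illustrative assumptions, not the algorithm derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
K, S, mu, gamma = 4, 5, 0.05, 0.9
P = rng.random((S, S))
P /= P.sum(axis=1, keepdims=True)          # shared Markov chain
r = rng.random(S)                          # state rewards
theta = np.zeros((K, S))                   # tabular value estimates
neighbors = {k: [(k - 1) % K, k, (k + 1) % K] for k in range(K)}
state = rng.integers(S, size=K)

for _ in range(20_000):
    psi = np.empty_like(theta)
    for k in range(K):
        s = state[k]
        s_next = rng.choice(S, p=P[s])
        td = r[s] + gamma * theta[k, s_next] - theta[k, s]  # TD error
        psi[k] = theta[k]
        psi[k, s] += mu * td               # adapt: local TD(0) step
        state[k] = s_next
    # combine: average parameters with immediate neighbors only
    theta = np.array([psi[neighbors[k]].mean(axis=0) for k in range(K)])

v_true = np.linalg.solve(np.eye(S) - gamma * P, r)
print(np.abs(theta.mean(axis=0) - v_true).max())  # close to the true values
```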
no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed
In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks.
no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed
First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations.
no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed
The expressions reveal how the various parameters of the asynchronous behavior influence network performance.
no code implementations • 18 May 2012 • Ali H. Sayed
The agents are linked together through a connection topology, and they cooperate with each other through local interactions to solve distributed optimization, estimation, and inference problems in real-time.