Search Results for author: Ali H. Sayed

Found 79 papers, 6 papers with code

Asynchronous Diffusion Learning with Agent Subsampling and Local Updates

no code implementations • 8 Feb 2024 • Elsa Rizk, Kun Yuan, Ali H. Sayed

In this work, we examine a network of agents operating asynchronously, aiming to discover an ideal global model that suits individual local datasets.

Federated Learning

Diffusion Stochastic Optimization for Min-Max Problems

1 code implementation • 26 Jan 2024 • Haoyuan Cai, Sulaiman A. Alghunaim, Ali H. Sayed

The optimistic gradient method is useful in addressing minimax optimization problems.

Stochastic Optimization
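As a concrete illustration of the optimistic gradient update mentioned above, the following minimal sketch applies it to a small bilinear saddle-point problem f(x, y) = x^T A y (minimize over x, maximize over y). The matrix A, step size mu, and iteration count are assumptions for illustration; this is the generic single-agent recursion, not the diffusion (multi-agent) algorithm developed in the paper.

```python
import numpy as np

# Minimal sketch of the optimistic gradient method on a bilinear
# saddle-point problem f(x, y) = x^T A y, minimized over x and
# maximized over y.  A, mu, and the iterate count are illustrative.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0])
mu = 0.05                                   # step size (assumed)
gx_prev, gy_prev = A @ y, A.T @ x           # gradients at the previous iterate

for _ in range(4000):
    gx, gy = A @ y, A.T @ x                 # grad_x f = A y,  grad_y f = A^T x
    x = x - mu * (2.0 * gx - gx_prev)       # optimistic descent step on x
    y = y + mu * (2.0 * gy - gy_prev)       # optimistic ascent step on y
    gx_prev, gy_prev = gx, gy

print("distance from the saddle point at the origin:",
      np.linalg.norm(x) + np.linalg.norm(y))
```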

Causal Influences over Social Learning Networks

no code implementations • 13 Jul 2023 • Mert Kayaalp, Ali H. Sayed

This paper investigates causal influences between agents linked by a social graph and interacting over time.

Decision Making

Non-Asymptotic Performance of Social Machine Learning Under Limited Data

no code implementations • 15 Jun 2023 • Ping Hu, Virginia Bordignon, Mert Kayaalp, Ali H. Sayed

This paper studies the probability of error associated with the social machine learning framework, which involves an independent training phase followed by a cooperative decision-making phase over a graph.

Classification Decision Making

Graph Exploration for Effective Multi-agent Q-Learning

no code implementations • 19 Apr 2023 • Ainur Zhaikhan, Ali H. Sayed

Moreover, the proposed scheme allows agents to communicate in a fully decentralized manner with minimal information exchange.

Multi-agent Reinforcement Learning Q-Learning

Compressed Regression over Adaptive Networks

no code implementations • 7 Apr 2023 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed

In this work we derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.

regression

Decentralized Adversarial Training over Graphs

no code implementations • 23 Mar 2023 • Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed

The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.

On the Fusion Strategies for Federated Decision Making

no code implementations • 10 Mar 2023 • Mert Kayaalp, Yunus Inan, Visa Koivunen, Emre Telatar, Ali H. Sayed

We consider the problem of information aggregation in federated decision making, where a group of agents collaborate to infer the underlying state of nature without sharing their private data with the central processor or each other.

Decision Making

Policy Evaluation in Decentralized POMDPs with Belief Sharing

1 code implementation • 8 Feb 2023 • Mert Kayaalp, Fatima Ghadieh, Ali H. Sayed

As a remedy, we propose a fully decentralized belief forming strategy that relies on individual updates and on localized interactions over a communication network.

Multi-agent Reinforcement Learning

Memory-Aware Social Learning under Partial Information Sharing

no code implementations • 25 Jan 2023 • Michele Cirillo, Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

We devise a novel learning strategy where each agent forms a valid belief by completing the partial beliefs received from its neighbors.

Enforcing Privacy in Distributed Learning with Performance Guarantees

no code implementations • 16 Jan 2023 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

We study the privatization of distributed learning and optimization strategies.

Distributed Bayesian Learning of Dynamic States

no code implementations • 5 Dec 2022 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

This work studies networked agents cooperating to track a dynamical state of nature under partial information.

Local Graph-homomorphic Processing for Privatized Distributed Systems

no code implementations • 26 Oct 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

We study the generation of dependent random numbers in a distributed fashion in order to enable privatized distributed learning by networked agents.

Networked Signal and Information Processing

no code implementations • 25 Oct 2022 • Stefan Vlaski, Soummya Kar, Ali H. Sayed, José M. F. Moura

Moreover, and significantly, theory and applications show that networked agents, through cooperation and sharing, are able to match the performance of cloud or federated solutions, while offering the potential for improved privacy, increased resilience, and resource savings.

Decision Making Inference Optimization

Quantization for decentralized learning under subspace constraints

no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed

In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.

Quantization

On the Arithmetic and Geometric Fusion of Beliefs for Distributed Inference

no code implementations • 28 Apr 2022 • Mert Kayaalp, Yunus Inan, Emre Telatar, Ali H. Sayed

We study the asymptotic learning rates under linear and log-linear combination rules of belief vectors in a distributed hypothesis testing problem.
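A minimal sketch of the two fusion rules studied here, with illustrative belief vectors and combination weights (both assumed): the linear rule averages the belief vectors arithmetically, while the log-linear rule takes a weighted geometric mean and renormalizes.

```python
import numpy as np

# Linear (arithmetic) versus log-linear (geometric) fusion of beliefs.
# Belief values and convex combination weights are illustrative only.
beliefs = np.array([             # rows: agents, columns: hypotheses
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.6, 0.1, 0.3],
])
weights = np.array([0.5, 0.3, 0.2])     # convex combination weights (assumed)

# Linear fusion: weighted arithmetic average of the belief vectors.
linear = weights @ beliefs

# Log-linear fusion: weighted geometric mean, renormalized to a probability vector.
log_linear = np.exp(weights @ np.log(beliefs))
log_linear /= log_linear.sum()

print("arithmetic fusion:", linear)
print("geometric fusion :", log_linear)
```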

Decentralized learning in the presence of low-rank noise

no code implementations • 18 Mar 2022 • Roula Nassif, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

Observations collected by agents in a network may be unreliable due to observation noise or interference.

Optimal Aggregation Strategies for Social Learning over Graphs

no code implementations • 14 Mar 2022 • Ping Hu, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

Adaptive social learning is a useful tool for studying distributed decision-making problems over graphs.

Decision Making

Privatized Graph Federated Learning

no code implementations • 14 Mar 2022 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning is a semi-distributed algorithm, where a server communicates with multiple dispersed clients to learn a global model.

Federated Learning

Explainability and Graph Learning from Social Interactions

no code implementations • 14 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed

Social learning algorithms provide models for the formation of opinions over social networks resulting from local reasoning and peer-to-peer exchanges.

Graph Learning

Online Graph Learning from Social Interactions

no code implementations • 11 Mar 2022 • Valentina Shumovskaia, Konstantinos Ntemos, Stefan Vlaski, Ali H. Sayed

For a given graph topology, these algorithms allow for the prediction of formed opinions.

Graph Learning

Social Opinion Formation and Decision Making Under Communication Trends

no code implementations • 4 Mar 2022 • Mert Kayaalp, Virginia Bordignon, Ali H. Sayed

We show that agents can learn the true hypothesis even if they do not discuss it, at rates comparable to traditional social learning.

Decision Making

Combinations of Adaptive Filters

no code implementations • 22 Dec 2021 • Jerónimo Arenas-García, Luis A. Azpicueta-Ruiz, Magno T. M. Silva, Vitor H. Nascimento, Ali H. Sayed

Adaptive filters are at the core of many signal processing applications, ranging from acoustic noise suppression, echo cancellation, array beamforming, and channel equalization to more recent sensor network applications in surveillance, target localization, and tracking.

Learning from Heterogeneous Data Based on Social Interactions over Graphs

1 code implementation • 17 Dec 2021 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

In the proposed social machine learning (SML) strategy, two phases are present: in the training phase, classifiers are independently trained to generate a belief over a set of hypotheses using a finite number of training samples; in the prediction phase, classifiers evaluate streaming unlabeled observations and share their instantaneous beliefs with neighboring classifiers.

Decision Making
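The two-phase structure described above can be sketched as follows, assuming a toy setting with two hypotheses, simple Gaussian-mean classifiers, and a fixed doubly stochastic combination matrix A (all assumptions); the paper's actual belief-update rule is more elaborate, so this only illustrates the "train independently, then share beliefs cooperatively" structure.

```python
import numpy as np

# Toy structural sketch of the two SML phases: independent local training,
# followed by streaming prediction with cooperative belief sharing.
rng = np.random.default_rng(0)
n_agents, d = 3, 2
means = {0: np.zeros(d), 1: np.ones(d)}          # true class-conditional means

# Training phase: each agent estimates the class means from its own data.
local_models = []
for _ in range(n_agents):
    est = {c: (means[c] + rng.standard_normal((100, d))).mean(axis=0) for c in (0, 1)}
    local_models.append(est)

# Prediction phase: streaming unlabeled observations, all drawn from class 1.
A = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])                  # combination matrix (assumed)
log_belief = np.zeros((n_agents, 2))             # log-beliefs over the two hypotheses
for _ in range(30):
    x = means[1] + rng.standard_normal(d)        # unlabeled sample, true class 1
    for k, est in enumerate(local_models):
        # Gaussian log-likelihood of x under each agent's estimated class means.
        log_belief[k] += [-0.5 * np.sum((x - est[c]) ** 2) for c in (0, 1)]
    log_belief = A @ log_belief                  # cooperative belief sharing

belief = np.exp(log_belief - log_belief.max(axis=1, keepdims=True))
belief /= belief.sum(axis=1, keepdims=True)
print(belief)                                    # beliefs concentrate on class 1
```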

Distributed Adaptive Learning Under Communication Constraints

no code implementations • 3 Dec 2021 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed

We propose a diffusion strategy nicknamed as ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step that leverages a recently introduced class of stochastic compression operators; and iii) a combination step where each agent combines the compressed updates received from its neighbors.
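A structural sketch of the three ACTC steps is given below on a toy distributed least-mean-squares problem. The quantizer, combination weights, and step size are assumptions for illustration, and the compression is applied directly to the adapted iterate rather than in the differential form with memory used in the paper.

```python
import numpy as np

# Toy adapt / compress / combine loop on a distributed LMS problem.
rng = np.random.default_rng(0)
n_agents, d = 4, 3
w_true = np.array([1.0, -2.0, 0.5])
A = np.full((n_agents, n_agents), 1.0 / n_agents)   # combination weights (assumed)
W = np.zeros((n_agents, d))                          # current local models
mu, delta = 0.05, 0.05                               # step size and quantizer grid

def quantize(x, delta):
    """Unbiased stochastic quantization onto a grid of spacing delta."""
    low = np.floor(x / delta) * delta
    p = (x - low) / delta                            # probability of rounding up
    return low + delta * (rng.random(x.shape) < p)

for _ in range(3000):
    psi = np.empty_like(W)
    for k in range(n_agents):                        # i) adapt: local SGD step
        u = rng.standard_normal(d)
        meas = u @ w_true + 0.1 * rng.standard_normal()
        psi[k] = W[k] + mu * (meas - u @ W[k]) * u
    q = quantize(psi, delta)                         # ii) compress what is transmitted
    W = A @ q                                        # iii) combine neighbors' messages

print("average estimation error:", np.linalg.norm(W - w_true, axis=1).mean())
```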

Hidden Markov Modeling over Graphs

no code implementations • 26 Nov 2021 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

This work proposes a multi-agent filtering algorithm over graphs for finite-state hidden Markov models (HMMs), which can be used for sequential state estimation or for tracking opinion formation over dynamic social networks.

A Graph Federated Architecture with Privacy Preserving Learning

no code implementations • 26 Apr 2021 • Elsa Rizk, Ali H. Sayed

Thus in this work, we develop a private multi-server federated learning scheme, which we call graph federated learning.

Federated Learning Privacy Preserving

Competing Adaptive Networks

no code implementations • 29 Mar 2021 • Stefan Vlaski, Ali H. Sayed

Adaptive networks have the capability to pursue solutions of global stochastic optimization problems by relying only on local interactions within neighborhoods.

Stochastic Optimization

Deception in Social Learning

no code implementations • 26 Mar 2021 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

We then show that it is possible for such attacks to succeed, by exhibiting strategies that the malicious agents can adopt for this purpose.

Decision-Making Algorithms for Learning and Adaptation with Application to COVID-19 Data

no code implementations • 14 Dec 2020 • Stefano Marano, Ali H. Sayed

This work focuses on the development of a new family of decision-making algorithms for adaptation and learning, which are specifically tailored to decision problems and are constructed by building up on first principles from decision theory.

Decision Making

Federated Learning under Importance Sampling

no code implementations • 14 Dec 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning encapsulates distributed learning strategies that are managed by a central unit.

Federated Learning

Second-Order Guarantees in Federated Learning

no code implementations • 2 Dec 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed

Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy.

Federated Learning

Optimal Importance Sampling for Federated Learning

no code implementations • 26 Oct 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients for their learning updates.

Federated Learning regression
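The two levels of sampling described above can be sketched generically as follows. Here the selection probabilities p_k are simply set proportional to local dataset size, and the 1/(N p_k) reweighting keeps the aggregated gradient unbiased; the optimal importance-sampling schedules derived in the paper are not reproduced.

```python
import numpy as np

# Generic federated round: the server samples L agents with probabilities p_k,
# each sampled agent computes a mini-batch gradient from its local data, and
# the server applies an unbiased reweighted average of these gradients.
rng = np.random.default_rng(0)
N, d, L, mu = 10, 5, 3, 0.1
w_true = rng.standard_normal(d)
sizes = rng.integers(20, 100, N)                 # heterogeneous local dataset sizes
data = []
for n in sizes:
    X = rng.standard_normal((n, d))
    data.append((X, X @ w_true + 0.1 * rng.standard_normal(n)))

p = sizes / sizes.sum()                          # agent selection probabilities (assumed)
w = np.zeros(d)
for _ in range(500):
    chosen = rng.choice(N, size=L, replace=True, p=p)
    g_hat = np.zeros(d)
    for k in chosen:
        X, y = data[k]
        idx = rng.choice(len(y), size=10, replace=False)        # local data sampling
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)      # local stochastic gradient
        g_hat += grad / (N * p[k])                              # importance reweighting
    w -= mu * g_hat / L                          # server update with the unbiased estimate

print("estimation error:", np.linalg.norm(w - w_true))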

Social learning under inferential attacks

no code implementations • 26 Oct 2020 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

A common assumption in the social learning literature is that agents exchange information in an unselfish manner.

Graph-Homomorphic Perturbations for Private Decentralized Learning

no code implementations • 23 Oct 2020 • Stefan Vlaski, Ali H. Sayed

Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information as a result of repeated local exchanges of intermediate estimates.

Privacy Preserving Stochastic Optimization
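One simple way to visualize the idea of perturbations that cancel over the graph is the pairwise zero-sum construction sketched below, where the two endpoints of each edge add noises of equal magnitude and opposite sign to the estimates they exchange. This is an illustrative construction under assumed notation, not necessarily the exact graph-homomorphic scheme of the paper.

```python
import numpy as np

# Pairwise-cancelling edge noise: each edge (k, l) carries +v on one side
# and -v on the other, so the total injected noise over the network is zero
# while each individual exchanged message remains masked.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # ring network (assumed)
n_agents, sigma = 4, 1.0

noise = np.zeros((n_agents, n_agents))            # noise[k, l]: added by k to its message to l
for k, l in edges:
    v = sigma * rng.standard_normal()
    noise[k, l] = +v                              # agent k masks its message to l
    noise[l, k] = -v                              # agent l uses the opposite sign toward k

print("total injected noise over the network:", noise.sum())   # exactly zero
```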

Network Classifiers Based on Social Learning

no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.

Dif-MAML: Decentralized Multi-Agent Meta-Learning

no code implementations • 6 Oct 2020 • Mert Kayaalp, Stefan Vlaski, Ali H. Sayed

The formalism of meta-learning is actually well-suited to this decentralized setting, where the learner would be able to benefit from information and computational power spread across the agents.

Meta-Learning

Partial Information Sharing over Social Learning Networks

1 code implementation • 24 Jun 2020 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

Instead of sharing the entirety of their beliefs, this work considers the case in which agents share only their beliefs regarding one hypothesis of interest, with the purpose of evaluating its validity, and derives conditions under which this policy does not affect truth learning.

A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features

no code implementations • 15 Jun 2020 • Sulaiman A. Alghunaim, Ming Yan, Ali H. Sayed

This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents.

regression

Logical Team Q-learning: An approach towards factored policies in cooperative MARL

no code implementations • 5 Jun 2020 • Lucas Cassano, Ali H. Sayed

We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work.

Q-Learning

Tracking Performance of Online Stochastic Learners

no code implementations • 4 Apr 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed

The utilization of online stochastic algorithms is popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.

Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization

no code implementations • 31 Mar 2020 • Stefan Vlaski, Ali H. Sayed

Rapid advances in data collection and processing capabilities have allowed for the use of increasingly complex models that give rise to nonconvex optimization problems.

Dynamic Federated Learning

no code implementations • 20 Feb 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.

Federated Learning

Multitask learning over graphs: An Approach for Distributed, Streaming Machine Learning

no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed

Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist in another problem) and helps improve generalization performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias.

BIG-bench Machine Learning Inductive Bias +1

Graph Learning Under Partial Observability

no code implementations • 18 Dec 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

Many optimization, inference, and learning tasks can be accomplished efficiently by means of decentralized processing algorithms, where the network topology (i.e., the graph) plays a critical role in enabling the interactions among neighboring nodes.

Distributed Optimization Graph Learning

Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization

no code implementations • 30 Oct 2019 • Stefan Vlaski, Ali H. Sayed

Under appropriate cooperation protocols and parameter choices, fully decentralized solutions for stochastic optimization have been shown to match the performance of centralized solutions and result in linear speedup (in the number of agents) relative to non-cooperative approaches in the strongly-convex setting.

Stochastic Optimization

Network Classifiers With Output Smoothing

no code implementations • 30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed

This work introduces two strategies for training network classifiers with heterogeneous agents.

Social Learning with Partial Information Sharing

1 code implementation • 30 Oct 2019 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

This work studies the learning abilities of agents sharing partial beliefs over social networks.

Signal Processing Multiagent Systems

Regularized Diffusion Adaptation via Conjugate Smoothing

no code implementations • 20 Sep 2019 • Stefan Vlaski, Lieven Vandenberghe, Ali H. Sayed

The purpose of this work is to develop and study a distributed strategy for Pareto optimization of an aggregate cost consisting of regularized risks.

ISL: A novel approach for deep exploration

1 code implementation • 13 Sep 2019 • Lucas Cassano, Ali H. Sayed

In this article we explore an alternative approach to address deep exploration and we introduce the ISL algorithm, which is efficient at performing deep exploration.

Q-Learning

Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization

no code implementations • 19 Aug 2019 • Stefan Vlaski, Ali H. Sayed

Recent years have seen increased interest in performance guarantees of gradient descent algorithms for non-convex optimization.

Distributed Learning in Non-Convex Environments - Part II: Polynomial Escape from Saddle-Points

no code implementations • 3 Jul 2019 • Stefan Vlaski, Ali H. Sayed

In Part I [2] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point.

Graph Learning over Partially Observed Diffusion Networks: Role of Degree Concentration

no code implementations • 5 Apr 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

This claim is proved for three matrix estimators: i) the Granger estimator, which adapts to the partial observability setting the solution that is exact under full observability; ii) the one-lag correlation matrix; and iii) the residual estimator based on the difference between two consecutive time samples.

Clustering Graph Learning
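For intuition, the sketch below implements the Granger estimator from item i) in the full-observability regime where it is exact: for a first-order recursion y_n = A y_{n-1} + w_n with white noise, A = R_1 R_0^{-1} in terms of the zero-lag and one-lag correlation matrices. The network matrix and sample size are assumed, and the partial-observability adaptation studied in the paper is not reproduced.

```python
import numpy as np

# Granger estimator of the combination matrix under full observability.
rng = np.random.default_rng(0)
N, T = 6, 50000
A = rng.random((N, N))
A /= 2.0 * np.abs(np.linalg.eigvals(A)).max()   # make the recursion stable

Y = np.zeros((T, N))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + rng.standard_normal(N)  # simulate the network signal

R0 = Y[:-1].T @ Y[:-1] / (T - 1)                  # zero-lag correlation estimate
R1 = Y[1:].T @ Y[:-1] / (T - 1)                   # one-lag correlation estimate
A_hat = R1 @ np.linalg.inv(R0)                    # Granger estimator

print("relative estimation error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```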

On the Influence of Bias-Correction on Distributed Stochastic Optimization

no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed

It is still unknown whether, when, and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradients and constant step-sizes.

Stochastic Optimization

Decentralized Decision-Making Over Multi-Task Networks

no code implementations • 20 Dec 2018 • Sahar Khawatmi, Abdelhak M. Zoubir, Ali H. Sayed

In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network.

Decision Making

Learning Kolmogorov Models for Binary Random Variables

no code implementations • 6 Jun 2018 • Hadi Ghauch, Mikael Skoglund, Hossein Shokri-Ghadikolaei, Carlo Fischione, Ali H. Sayed

We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model, for a collection of binary random variables.

BIG-bench Machine Learning Interpretable Machine Learning +1

Supervised Learning Under Distributed Features

no code implementations • 29 May 2018 • Bicheng Ying, Kun Yuan, Ali H. Sayed

This work studies the problem of learning under both large datasets and large-dimensional feature space scenarios.

Stochastic Learning under Random Reshuffling with Constant Step-sizes

no code implementations • 21 Mar 2018 • Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed

In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly.
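The two sampling schemes being compared can be sketched on a simple least-squares risk as follows; problem sizes and step size are illustrative, and random reshuffling typically yields the lower steady-state error, consistent with the observation above.

```python
import numpy as np

# Constant step-size SGD on a least-squares risk: uniform sampling with
# replacement versus random reshuffling (one permutation per epoch).
rng = np.random.default_rng(0)
n, d, mu, epochs = 200, 5, 0.01, 50
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.05 * rng.standard_normal(n)

def sgd(index_stream):
    w = np.zeros(d)
    for i in index_stream:
        w -= mu * (X[i] @ w - y[i]) * X[i]        # per-sample gradient step
    return w

uniform_idx = rng.integers(0, n, size=epochs * n)                  # with replacement
reshuffled_idx = np.concatenate([rng.permutation(n) for _ in range(epochs)])

print("error, uniform sampling  :", np.linalg.norm(sgd(uniform_idx) - w_true))
print("error, random reshuffling:", np.linalg.norm(sgd(reshuffled_idx) - w_true))
```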

Distributed Coupled Multi-Agent Stochastic Optimization

no code implementations • 23 Dec 2017 • Sulaiman A. Alghunaim, Ali H. Sayed

In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally.

Optimization and Control

Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling

no code implementations • 4 Aug 2017 • Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed

For such situations, the balanced gradient computation property of AVRG becomes a real advantage in reducing idle time caused by unbalanced local data storage requirements, which is characteristic of other reduced-variance gradient algorithms.

Variance-Reduced Stochastic Learning under Random Reshuffling

no code implementations • 4 Aug 2017 • Bicheng Ying, Kun Yuan, Ali H. Sayed

First, it resolves this open issue and provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA; the argument is also adaptable to other variance-reduced algorithms.

Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

no code implementations • 20 Apr 2017 • Bicheng Ying, Ali H. Sayed

The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization when gradient noise is present.

Stochastic Optimization
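For reference, a minimal single-agent stochastic subgradient recursion with constant step-size is sketched below on a least-absolute-deviations risk, whose loss is non-differentiable at zero; the model and step size are assumed, and the multi-agent analysis of this Part II is not reproduced.

```python
import numpy as np

# Stochastic subgradient learning on the risk E|u^T w - d| (LAD regression).
rng = np.random.default_rng(0)
d, mu = 4, 0.01
w_true = rng.standard_normal(d)
w = np.zeros(d)

for _ in range(20000):
    u = rng.standard_normal(d)                       # streaming regressor
    target = u @ w_true + 0.1 * rng.standard_normal()
    subgrad = np.sign(u @ w - target) * u            # subgradient of |u^T w - target|
    w -= mu * subgrad                                # constant step-size update

print("estimation error:", np.linalg.norm(w - w_true))
```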

Multitask diffusion adaptation over networks with common latent representations

no code implementations • 13 Feb 2017 • Jie Chen, Cédric Richard, Ali H. Sayed

Online learning with streaming data in a distributed and collaborative manner can be useful in a wide range of applications.

Decentralized Clustering and Linking by Networked Agents

no code implementations • 28 Oct 2016 • Sahar Khawatmi, Ali H. Sayed, Abdelhak M. Zoubir

We consider the problem of decentralized clustering and estimation over multi-task networks, where agents infer and track different models of interest.

Clustering

On the Influence of Momentum Acceleration on Online Learning

no code implementations • 14 Mar 2016 • Kun Yuan, Bicheng Ying, Ali H. Sayed

The article examines in some detail the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime.
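The momentum (heavy-ball) stochastic-gradient recursion analyzed here can be sketched as follows on a streaming least-mean-squares problem; the step size, momentum parameter, and model are illustrative assumptions.

```python
import numpy as np

# Heavy-ball (momentum) stochastic gradient recursion with constant step size.
rng = np.random.default_rng(0)
d, mu, beta = 4, 0.01, 0.9            # step size and momentum parameter (assumed)
w_true = rng.standard_normal(d)
w = np.zeros(d)
w_prev = np.zeros(d)

for _ in range(5000):
    u = rng.standard_normal(d)
    target = u @ w_true + 0.1 * rng.standard_normal()
    grad = (u @ w - target) * u                      # instantaneous LMS gradient
    w_next = w - mu * grad + beta * (w - w_prev)     # heavy-ball update
    w_prev, w = w, w_next

print("estimation error:", np.linalg.norm(w - w_true))
```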

Online Dual Coordinate Ascent Learning

no code implementations • 24 Feb 2016 • Bicheng Ying, Kun Yuan, Ali H. Sayed

The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees.
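The standard S-DCA recursion referenced above can be sketched for L2-regularized least-squares as follows: the primal weight vector is maintained as w = X^T alpha / (lambda n), and each step updates one dual variable in closed form. This is the traditional batch-stored S-DCA, not the online variant developed in the paper; the regularization and data are assumed.

```python
import numpy as np

# S-DCA for the primal risk (1/n) sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.05 * rng.standard_normal(n)

alpha = np.zeros(n)                     # dual variables, one per sample
w = np.zeros(d)                         # primal iterate, w = X^T alpha / (lam * n)

for _ in range(20 * n):                 # about 20 passes over the data
    i = rng.integers(n)
    # Closed-form maximization of the dual over the single coordinate alpha_i.
    delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
    alpha[i] += delta
    w += delta * X[i] / (lam * n)

print("primal objective:",
      0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w)
```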

Asynchronous adaptive networks

no code implementations • 30 Nov 2015 • Ali H. Sayed, Xiaochuan Zhao

In a recent article [1] we surveyed advances related to adaptation, learning, and optimization over synchronous networks.

Performance Limits of Stochastic Sub-Gradient Learning, Part I: Single Agent Case

no code implementations • 24 Nov 2015 • Bicheng Ying, Ali H. Sayed

In this work and the supporting Part II, we examine the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature.

Denoising

Information Exchange and Learning Dynamics over Weakly-Connected Adaptive Networks

no code implementations • 4 Dec 2014 • Bicheng Ying, Ali H. Sayed

The paper examines the learning mechanism of adaptive agents over weakly-connected graphs and reveals an interesting behavior on how information flows through such topologies.

Clustering

Distributed Clustering and Learning Over Networks

no code implementations • 22 Sep 2014 • Xiaochuan Zhao, Ali H. Sayed

In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks.

Clustering

Stability and Performance Limits of Adaptive Primal-Dual Networks

no code implementations • 16 Aug 2014 • Zaid J. Towfic, Ali H. Sayed

It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies.

Stochastic Optimization

Dictionary Learning over Distributed Models

no code implementations • 6 Feb 2014 • Jianshu Chen, Zaid J. Towfic, Ali H. Sayed

In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements.

Collaborative Inference Dictionary Learning

Distributed Policy Evaluation Under Multiple Behavior Strategies

no code implementations • 30 Dec 2013 • Sergio Valcarcel Macua, Jianshu Chen, Santiago Zazo, Ali H. Sayed

We apply diffusion strategies to develop a fully-distributed cooperative reinforcement learning algorithm in which agents in a network communicate only with their immediate neighbors to improve predictions about their environment.

Asynchronous Adaptation and Learning over Networks - Part I: Modeling and Stability Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks.

Distributed Optimization

Asynchronous Adaptation and Learning over Networks - Part II: Performance Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

The expressions reveal how the various parameters of the asynchronous behavior influence network performance.

Distributed Optimization

Asynchronous Adaptation and Learning over Networks - Part III: Comparison Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations.

Diffusion Adaptation over Networks

no code implementations • 18 May 2012 • Ali H. Sayed

The agents are linked together through a connection topology, and they cooperate with each other through local interactions to solve distributed optimization, estimation, and inference problems in real time.

Distributed Optimization
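A minimal sketch of the adapt-then-combine (ATC) diffusion recursion that underlies this framework is given below: each agent performs a local LMS (stochastic-gradient) step and then averages the intermediate estimates of its neighbors through a combination matrix A. The ring topology, weights, and step size are assumptions for illustration.

```python
import numpy as np

# Adapt-then-combine diffusion LMS over a ring of agents.
rng = np.random.default_rng(0)
n_agents, d, mu = 5, 3, 0.02
w_true = np.array([1.0, -0.5, 2.0])

# Doubly stochastic combination matrix for a ring: self plus two neighbors.
A = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    A[k, k] = 0.5
    A[k, (k - 1) % n_agents] = 0.25
    A[k, (k + 1) % n_agents] = 0.25

W = np.zeros((n_agents, d))
for _ in range(5000):
    psi = np.empty_like(W)
    for k in range(n_agents):
        u = rng.standard_normal(d)                       # local streaming regressor
        dk = u @ w_true + 0.1 * rng.standard_normal()    # noisy local measurement
        psi[k] = W[k] + mu * (dk - u @ W[k]) * u         # adapt: local LMS step
    W = A @ psi                                          # combine: neighborhood averaging

print("per-agent estimation error:", np.linalg.norm(W - w_true, axis=1))
```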
