Search Results for author: Ali H. Sayed

Found 53 papers, 2 papers with code

Hidden Markov Modeling over Graphs

no code implementations • 26 Nov 2021 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

This work proposes a multi-agent filtering algorithm over graphs for finite-state hidden Markov models (HMMs), which can be used for sequential state estimation or for tracking opinion formation over dynamic social networks.
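As a point of reference for the sequential state estimation mentioned above, the classical single-agent forward filter for a finite-state HMM can be sketched as follows. This is the standard recursion the multi-agent setting builds on, not the paper's graph-based algorithm; the two-state toy chain and all matrices below are illustrative assumptions.

```python
import numpy as np

# Classical single-agent forward filter for a finite-state HMM.
# A: transition matrix, A[i, j] = P(x_t = j | x_{t-1} = i)
# B: emission matrix,   B[j, k] = P(y_t = k | x_t = j)
def hmm_filter(A, B, pi0, observations):
    belief = pi0.copy()
    beliefs = []
    for y in observations:
        predict = belief @ A            # time update (prediction step)
        update = predict * B[:, y]      # measurement update
        belief = update / update.sum()  # normalize to a probability vector
        beliefs.append(belief)
    return np.array(beliefs)

# Two-state toy chain with noisy binary observations (illustrative values).
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi0 = np.array([0.5, 0.5])
posteriors = hmm_filter(A, B, pi0, [0, 0, 1])
```

Each row of `posteriors` is the belief over the hidden state after the corresponding observation; in the multi-agent setting, agents would additionally fuse such beliefs with their graph neighbors.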

A Graph Federated Architecture with Privacy Preserving Learning

no code implementations • 26 Apr 2021 • Elsa Rizk, Ali H. Sayed

Thus in this work, we develop a private multi-server federated learning scheme, which we call graph federated learning.

Federated Learning

Competing Adaptive Networks

no code implementations • 29 Mar 2021 • Stefan Vlaski, Ali H. Sayed

Adaptive networks have the capability to pursue solutions of global stochastic optimization problems by relying only on local interactions within neighborhoods.

Stochastic Optimization

Deception in Social Learning

no code implementations • 26 Mar 2021 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

We then show that such attacks can succeed, by establishing that strategies exist which the malicious agents can adopt for this purpose.

Decision-Making Algorithms for Learning and Adaptation with Application to COVID-19 Data

no code implementations • 14 Dec 2020 • Stefano Marano, Ali H. Sayed

This work focuses on the development of a new family of decision-making algorithms for adaptation and learning, which are specifically tailored to decision problems and are constructed by building up on first principles from decision theory.

Decision Making

Federated Learning under Importance Sampling

no code implementations • 14 Dec 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning encapsulates distributed learning strategies that are managed by a central unit.

Federated Learning

Second-Order Guarantees in Federated Learning

no code implementations • 2 Dec 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed

Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy.

Federated Learning

Optimal Importance Sampling for Federated Learning

no code implementations • 26 Oct 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients for their learning updates.

Federated Learning
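The two-level sampling described above (the server samples a subset of agents, and each sampled agent samples its own local data to form a stochastic gradient) can be sketched as a plain FedSGD-style loop. This is a generic illustration on a synthetic least-squares problem, not the paper's optimal importance-sampling scheme; all names, sizes, and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic local datasets for K agents: y = X @ w_true + noise (least squares).
K, n_local, d = 10, 50, 3
w_true = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(K):
    X = rng.standard_normal((n_local, d))
    y = X @ w_true + 0.01 * rng.standard_normal(n_local)
    data.append((X, y))

def local_stochastic_grad(w, X, y, batch=8):
    # Each agent samples a minibatch of its own data for a stochastic gradient.
    idx = rng.choice(len(y), size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch

w = np.zeros(d)
mu = 0.1  # constant step size (illustrative)
for _ in range(300):
    # Server samples a subset of agents each round...
    sampled = rng.choice(K, size=4, replace=False)
    # ...and averages their stochastic gradients for the global update.
    g = np.mean([local_stochastic_grad(w, *data[k]) for k in sampled], axis=0)
    w -= mu * g
```

Importance sampling, in this picture, would replace the uniform `rng.choice` draws with non-uniform probabilities (over agents and over local data) together with the matching reweighting of the gradients.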

Social learning under inferential attacks

no code implementations • 26 Oct 2020 • Konstantinos Ntemos, Virginia Bordignon, Stefan Vlaski, Ali H. Sayed

A common assumption in the social learning literature is that agents exchange information in an unselfish manner.

Graph-Homomorphic Perturbations for Private Decentralized Learning

no code implementations • 23 Oct 2020 • Stefan Vlaski, Ali H. Sayed

Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information as a result of repeated local exchanges of intermediate estimates.

Stochastic Optimization
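The "repeated local exchanges of intermediate estimates" mentioned above follow the adapt-then-combine diffusion pattern common in this literature: each agent takes a local stochastic-gradient step, then averages intermediate estimates over its neighborhood. The sketch below runs this on a ring network with illustrative data and uniform combination weights; the paper's graph-homomorphic privacy perturbations are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ring network of N agents; each agent sees noisy data for the same linear model.
N, d = 6, 2
w_true = np.array([1.0, -1.0])

# Uniform combination weights over each agent's neighborhood {k-1, k, k+1}.
A = np.zeros((N, N))
for k in range(N):
    for j in (k - 1, k, k + 1):
        A[k, j % N] = 1 / 3

w = np.zeros((N, d))  # one estimate per agent
mu = 0.05             # constant step size (illustrative)
for _ in range(500):
    # Adapt: each agent takes a local stochastic-gradient (LMS) step.
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.1 * rng.standard_normal()
        psi[k] = w[k] - mu * (x @ w[k] - y) * x
    # Combine: each agent averages intermediate estimates over its neighbors.
    w = A @ psi
```

It is the exchanged `psi` vectors that leak information about local data, which is what motivates perturbing them before sharing.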

Network Classifiers Based on Social Learning

no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.

Dif-MAML: Decentralized Multi-Agent Meta-Learning

no code implementations • 6 Oct 2020 • Mert Kayaalp, Stefan Vlaski, Ali H. Sayed

The formalism of meta-learning is actually well-suited to this decentralized setting, where the learner would be able to benefit from information and computational power spread across the agents.

Meta-Learning

Social Learning with Partial Information Sharing

no code implementations • 24 Jun 2020 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

Instead of sharing the entirety of their beliefs, this work considers the case in which agents will only share their beliefs regarding one hypothesis of interest, with the purpose of evaluating its validity, and draws conditions under which this policy does not affect truth learning.

A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features

no code implementations • 15 Jun 2020 • Sulaiman A. Alghunaim, Ming Yan, Ali H. Sayed

This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents.

Logical Team Q-learning: An approach towards factored policies in cooperative MARL

no code implementations • 5 Jun 2020 • Lucas Cassano, Ali H. Sayed

We derive LTQL as a stochastic approximation to a dynamic programming method we introduce in this work.

Q-Learning

Tracking Performance of Online Stochastic Learners

no code implementations • 4 Apr 2020 • Stefan Vlaski, Elsa Rizk, Ali H. Sayed

The utilization of online stochastic algorithms is popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.

Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization

no code implementations • 31 Mar 2020 • Stefan Vlaski, Ali H. Sayed

Rapid advances in data collection and processing capabilities have allowed for the use of increasingly complex models that give rise to nonconvex optimization problems.

Dynamic Federated Learning

no code implementations • 20 Feb 2020 • Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.

Federated Learning

Multitask Learning over Graphs: An Approach for Distributed, Streaming Machine Learning

no code implementations • 7 Jan 2020 • Roula Nassif, Stefan Vlaski, Cedric Richard, Jie Chen, Ali H. Sayed

Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist in another). It helps improve generalization performance relative to learning each task separately, by using the domain information contained in the training signals of related tasks as an inductive bias.

Transfer Learning

Graph Learning Under Partial Observability

no code implementations • 18 Dec 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

Many optimization, inference and learning tasks can be accomplished efficiently by means of decentralized processing algorithms where the network topology (i.e., the graph) plays a critical role in enabling the interactions among neighboring nodes.

Distributed Optimization • Graph Learning

Network Classifiers With Output Smoothing

no code implementations • 30 Oct 2019 • Elsa Rizk, Roula Nassif, Ali H. Sayed

This work introduces two strategies for training network classifiers with heterogeneous agents.

Social Learning with Partial Information Sharing

1 code implementation • 30 Oct 2019 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

This work studies the learning abilities of agents sharing partial beliefs over social networks.

Signal Processing • Multiagent Systems

Linear Speedup in Saddle-Point Escape for Decentralized Non-Convex Optimization

no code implementations • 30 Oct 2019 • Stefan Vlaski, Ali H. Sayed

Under appropriate cooperation protocols and parameter choices, fully decentralized solutions for stochastic optimization have been shown to match the performance of centralized solutions and result in linear speedup (in the number of agents) relative to non-cooperative approaches in the strongly-convex setting.

Stochastic Optimization

Regularized Diffusion Adaptation via Conjugate Smoothing

no code implementations • 20 Sep 2019 • Stefan Vlaski, Lieven Vandenberghe, Ali H. Sayed

The purpose of this work is to develop and study a distributed strategy for Pareto optimization of an aggregate cost consisting of regularized risks.

ISL: A novel approach for deep exploration

1 code implementation • 13 Sep 2019 • Lucas Cassano, Ali H. Sayed

In this article we explore an alternative approach to address deep exploration and we introduce the ISL algorithm, which is efficient at performing deep exploration.

Q-Learning

Second-Order Guarantees of Stochastic Gradient Descent in Non-Convex Optimization

no code implementations • 19 Aug 2019 • Stefan Vlaski, Ali H. Sayed

Recent years have seen increased interest in performance guarantees of gradient descent algorithms for non-convex optimization.

Distributed Learning in Non-Convex Environments -- Part II: Polynomial Escape from Saddle-Points

no code implementations • 3 Jul 2019 • Stefan Vlaski, Ali H. Sayed

In Part I [2] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point.

Graph Learning over Partially Observed Diffusion Networks: Role of Degree Concentration

no code implementations • 5 Apr 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

This claim is proved for three matrix estimators: i) the Granger estimator that adapts to the partial observability setting the solution that is exact under full observability; ii) the one-lag correlation matrix; and iii) the residual estimator based on the difference between two consecutive time samples.

Graph Learning

On the Influence of Bias-Correction on Distributed Stochastic Optimization

no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed

It is still unknown whether, when, and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradients and constant step-sizes.

Stochastic Optimization

Decentralized Decision-Making Over Multi-Task Networks

no code implementations • 20 Dec 2018 • Sahar Khawatmi, Abdelhak M. Zoubir, Ali H. Sayed

In important applications involving multi-task networks with multiple objectives, agents in the network need to decide between these multiple objectives and reach an agreement about which single objective to follow for the network.

Decision Making

Learning Kolmogorov Models for Binary Random Variables

no code implementations • 6 Jun 2018 • Hadi Ghauch, Mikael Skoglund, Hossein Shokri-Ghadikolaei, Carlo Fischione, Ali H. Sayed

We summarize our recent findings, where we proposed a framework for learning a Kolmogorov model, for a collection of binary random variables.

Interpretable Machine Learning • Recommendation Systems

Supervised Learning Under Distributed Features

no code implementations • 29 May 2018 • Bicheng Ying, Kun Yuan, Ali H. Sayed

This work studies the problem of learning under both large datasets and large-dimensional feature space scenarios.

Stochastic Learning under Random Reshuffling with Constant Step-sizes

no code implementations • 21 Mar 2018 • Bicheng Ying, Kun Yuan, Stefan Vlaski, Ali H. Sayed

In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly.
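The contrast described above, sampling the data uniformly with replacement versus visiting every sample once per epoch in a fresh random order, can be sketched on a toy least-squares problem. The setup and constants below are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic least-squares data for comparing the two sampling schemes.
n, d = 100, 2
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.05 * rng.standard_normal(n)

def run_sgd(reshuffle, epochs=30, mu=0.02):
    w = np.zeros(d)
    for _ in range(epochs):
        if reshuffle:
            order = rng.permutation(n)          # each sample used exactly once per epoch
        else:
            order = rng.integers(0, n, size=n)  # i.i.d. uniform sampling with replacement
        for i in order:
            w -= mu * (X[i] @ w - y[i]) * X[i]  # per-sample stochastic-gradient step
    return w

w_rr = run_sgd(reshuffle=True)
w_us = run_sgd(reshuffle=False)
```

Both variants converge here; the observation the paper studies is that, with constant step-sizes, the reshuffled variant tends to settle closer to the minimizer than uniform sampling.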

Distributed Coupled Multi-Agent Stochastic Optimization

no code implementations • 23 Dec 2017 • Sulaiman A. Alghunaim, Ali H. Sayed

In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally.

Optimization and Control

Variance-Reduced Stochastic Learning under Random Reshuffling

no code implementations • 4 Aug 2017 • Bicheng Ying, Kun Yuan, Ali H. Sayed

First, it resolves this open issue and provides the first theoretical guarantee of linear convergence under random reshuffling for SAGA; the argument is also adaptable to other variance-reduced algorithms.

Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling

no code implementations • 4 Aug 2017 • Kun Yuan, Bicheng Ying, Jiageng Liu, Ali H. Sayed

For such situations, the balanced gradient computation property of AVRG becomes a real advantage in reducing idle time caused by unbalanced local data storage requirements, which is characteristic of other reduced-variance gradient algorithms.

Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

no code implementations • 20 Apr 2017 • Bicheng Ying, Ali H. Sayed

The analysis in Part I revealed interesting properties for subgradient learning algorithms in the context of stochastic optimization when gradient noise is present.

Stochastic Optimization

Multitask diffusion adaptation over networks with common latent representations

no code implementations • 13 Feb 2017 • Jie Chen, Cédric Richard, Ali H. Sayed

Online learning with streaming data in a distributed and collaborative manner can be useful in a wide range of applications.

Decentralized Clustering and Linking by Networked Agents

no code implementations • 28 Oct 2016 • Sahar Khawatmi, Ali H. Sayed, Abdelhak M. Zoubir

We consider the problem of decentralized clustering and estimation over multi-task networks, where agents infer and track different models of interest.

On the Influence of Momentum Acceleration on Online Learning

no code implementations • 14 Mar 2016 • Kun Yuan, Bicheng Ying, Ali H. Sayed

The article examines in some detail the convergence rate and mean-square-error performance of momentum stochastic gradient methods in the constant step-size and slow adaptation regime.
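A minimal heavy-ball (momentum) stochastic-gradient loop in the constant step-size regime examined above can be sketched as follows; the least-squares risk, data, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic least-squares risk for the momentum stochastic-gradient sketch.
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([0.5, -1.5, 2.0])
y = X @ w_true + 0.05 * rng.standard_normal(n)

w = np.zeros(d)
v = np.zeros(d)       # momentum (velocity) accumulator
mu, beta = 0.01, 0.9  # constant step size and momentum parameter (illustrative)
for _ in range(5000):
    i = rng.integers(n)
    grad = (X[i] @ w - y[i]) * X[i]  # instantaneous stochastic gradient
    v = beta * v + grad              # accumulate momentum
    w -= mu * v                      # heavy-ball update
```

In the slow-adaptation regime, the effective step size behaves like mu/(1 - beta), which is one way to compare momentum methods against plain stochastic gradient with a rescaled step.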

Online Dual Coordinate Ascent Learning

no code implementations • 24 Feb 2016 • Bicheng Ying, Kun Yuan, Ali H. Sayed

The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees.

Asynchronous adaptive networks

no code implementations • 30 Nov 2015 • Ali H. Sayed, Xiaochuan Zhao

In a recent article [1] we surveyed advances related to adaptation, learning, and optimization over synchronous networks.

Performance Limits of Stochastic Sub-Gradient Learning, Part I: Single Agent Case

no code implementations • 24 Nov 2015 • Bicheng Ying, Ali H. Sayed

In this work and the supporting Part II, we examine the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature.

Denoising

Information Exchange and Learning Dynamics over Weakly-Connected Adaptive Networks

no code implementations • 4 Dec 2014 • Bicheng Ying, Ali H. Sayed

The paper examines the learning mechanism of adaptive agents over weakly-connected graphs and reveals an interesting behavior on how information flows through such topologies.

Distributed Clustering and Learning Over Networks

no code implementations • 22 Sep 2014 • Xiaochuan Zhao, Ali H. Sayed

In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks.

Stability and Performance Limits of Adaptive Primal-Dual Networks

no code implementations • 16 Aug 2014 • Zaid J. Towfic, Ali H. Sayed

It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies.

Stochastic Optimization

Dictionary Learning over Distributed Models

no code implementations • 6 Feb 2014 • Jianshu Chen, Zaid J. Towfic, Ali H. Sayed

In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements.

Dictionary Learning

Distributed Policy Evaluation Under Multiple Behavior Strategies

no code implementations • 30 Dec 2013 • Sergio Valcarcel Macua, Jianshu Chen, Santiago Zazo, Ali H. Sayed

We apply diffusion strategies to develop a fully-distributed cooperative reinforcement learning algorithm in which agents in a network communicate only with their immediate neighbors to improve predictions about their environment.

Asynchronous Adaptation and Learning over Networks - Part II: Performance Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

The expressions reveal how the various parameters of the asynchronous behavior influence network performance.

Distributed Optimization

Asynchronous Adaptation and Learning over Networks - Part III: Comparison Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

First, the results establish that the performance of adaptive networks is largely immune to the effect of asynchronous events: the mean and mean-square convergence rates and the asymptotic bias values are not degraded relative to synchronous or centralized implementations.

Asynchronous Adaptation and Learning over Networks - Part I: Modeling and Stability Analysis

no code implementations • 19 Dec 2013 • Xiaochuan Zhao, Ali H. Sayed

In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks.

Distributed Optimization

Diffusion Adaptation over Networks

no code implementations • 18 May 2012 • Ali H. Sayed

The agents are linked together through a connection topology, and they cooperate with each other through local interactions to solve distributed optimization, estimation, and inference problems in real-time.

Distributed Optimization
