Search Results for author: Vincenzo Matta

Found 11 papers, 3 papers with code

Compressed Regression over Adaptive Networks

no code implementations • 7 Apr 2023 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed

In this work we derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.

Regression

Memory-Aware Social Learning under Partial Information Sharing

no code implementations • 25 Jan 2023 • Michele Cirillo, Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

We devise a novel learning strategy where each agent forms a valid belief by completing the partial beliefs received from its neighbors.


Distributed Bayesian Learning of Dynamic States

no code implementations • 5 Dec 2022 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

This work studies networked agents cooperating to track a dynamical state of nature under partial information.

Quantization for decentralized learning under subspace constraints

no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed

In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.

Quantization

Learning from Heterogeneous Data Based on Social Interactions over Graphs

1 code implementation • 17 Dec 2021 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

In the proposed social machine learning (SML) strategy, two phases are present: in the training phase, classifiers are independently trained to generate a belief over a set of hypotheses using a finite number of training samples; in the prediction phase, classifiers evaluate streaming unlabeled observations and share their instantaneous beliefs with neighboring classifiers.

Decision Making
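The two SML phases can be sketched in a toy binary-hypothesis setting (a minimal illustration under assumed conditions, not the paper's implementation: three agents, 1-D Gaussian data, simple logistic classifiers, and a fully connected combination matrix are all hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypotheses: samples are N(-1, 1) under H0 and N(+1, 1) under H1.
# Training phase: each agent independently fits a 1-D logistic classifier
# on its own finite training set.
def train_logistic(x, y, lr=0.1, iters=500):
    w, b = 0.0, 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

agents = []
for _ in range(3):
    x = np.concatenate([rng.normal(-1, 1, 50), rng.normal(+1, 1, 50)])
    y = np.concatenate([np.zeros(50), np.ones(50)])
    agents.append(train_logistic(x, y))

# Prediction phase: streaming unlabeled data generated under H1; each agent
# updates its log-belief with its classifier output and combines it with
# the instantaneous beliefs shared by its neighbors.
log_belief = np.zeros(3)          # log P(H1)/P(H0) per agent
A = np.full((3, 3), 1 / 3)        # fully connected combination matrix
for _ in range(100):
    x = rng.normal(+1, 1)
    local = np.array([w * x + b for (w, b) in agents])  # log-likelihood scores
    log_belief = A @ (log_belief + local)               # share and combine

print(log_belief)  # positive entries: all agents favor the true hypothesis H1
```

The accumulation of combined log-beliefs over the stream is what lets the classifiers keep sharpening their decision during the prediction phase.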

Distributed Adaptive Learning Under Communication Constraints

no code implementations • 3 Dec 2021 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed

We propose a diffusion strategy nicknamed ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step that leverages a recently introduced class of stochastic compression operators; and iii) a combination step where each agent combines the compressed updates received from its neighbors.
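The three steps can be sketched for a scalar least-mean-squares problem over a ring network (a minimal sketch under assumed conditions; the randomized uniform quantizer below merely stands in for the stochastic compression operators the paper refers to):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(x, step=0.05):
    """Randomized uniform quantizer: unbiased, E[q(x)] = x (a stand-in for
    the class of stochastic compression operators)."""
    low = np.floor(x / step) * step
    p = (x - low) / step                      # probability of rounding up
    return low + step * (rng.random(x.shape) < p)

# Ring network of N agents estimating a common parameter w_true from noisy
# streaming regression data; A is a doubly stochastic combination matrix.
N, mu, w_true = 10, 0.02, 1.5
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 0.25

w = np.zeros(N)                               # current estimates
for _ in range(2000):
    # i) adaptation: individual stochastic-gradient update, constant step-size
    u = rng.standard_normal(N)                      # regressors
    d = u * w_true + 0.1 * rng.standard_normal(N)   # noisy measurements
    psi = w + mu * u * (d - u * w)
    # ii) compression: quantize the innovation before transmission
    q = w + stochastic_quantize(psi - w)
    # iii) combination: mix the compressed updates received from neighbors
    w = A @ q

print(np.abs(w - w_true).max())  # small steady-state error across all agents
```

Quantizing the innovation `psi - w` rather than the iterate itself keeps the transmitted quantity small, which is what makes a coarse unbiased quantizer workable at a constant step-size.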

Network Classifiers Based on Social Learning

no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed

Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.

Partial Information Sharing over Social Learning Networks

1 code implementation • 24 Jun 2020 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

This work considers the case in which agents, instead of sharing their entire beliefs, share only their belief regarding one hypothesis of interest, with the purpose of evaluating its validity, and derives conditions under which this policy does not affect truth learning.

Graph Learning Under Partial Observability

no code implementations • 18 Dec 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

Many optimization, inference and learning tasks can be accomplished efficiently by means of decentralized processing algorithms where the network topology (i.e., the graph) plays a critical role in enabling the interactions among neighboring nodes.

Distributed Optimization • Graph Learning

Social Learning with Partial Information Sharing

1 code implementation • 30 Oct 2019 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed

This work studies the learning abilities of agents sharing partial beliefs over social networks.

Signal Processing • Multiagent Systems

Graph Learning over Partially Observed Diffusion Networks: Role of Degree Concentration

no code implementations • 5 Apr 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed

This claim is proved for three matrix estimators: i) the Granger estimator, which adapts to the partial-observability setting the solution that is exact under full observability; ii) the one-lag correlation matrix; and iii) the residual estimator based on the difference between two consecutive time samples.

Clustering • Graph Learning
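The three estimators can be sketched for a fully observed first-order diffusion (VAR(1)) process (a minimal illustration under assumed conditions; the paper's focus is the partially observed setting, and the network model and variable names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y[n] = A y[n-1] + x[n] over a small random network, with A a
# stable (row-stochastic, scaled) combination matrix and x[n] white noise.
N, T = 8, 100000
W = (rng.random((N, N)) < 0.3).astype(float)
np.fill_diagonal(W, 1.0)
A = 0.9 * W / W.sum(axis=1, keepdims=True)    # spectral radius 0.9: stable

y = np.zeros(N)
samples = np.empty((T, N))
for n in range(T):
    y = A @ y + rng.standard_normal(N)
    samples[n] = y

Y0, Y1 = samples[:-1], samples[1:]            # consecutive time samples
R0 = Y0.T @ Y0 / (T - 1)                      # zero-lag correlation matrix
R1 = Y1.T @ Y0 / (T - 1)                      # one-lag correlation matrix

# i) Granger estimator: exact under full observability, since R1 = A R0
granger = R1 @ np.linalg.inv(R0)
# ii) one-lag correlation matrix, used directly as the estimator
one_lag = R1
# iii) residual estimator: difference between two consecutive time samples,
#      which equals R1 - R0
residual = (Y1 - Y0).T @ Y0 / (T - 1)

print(np.abs(granger - A).max())  # close to the true A when fully observed
```

Under full observability the identity R1 = A R0 makes the Granger estimator consistent; the point of the paper is how much of this structure survives when only a subnetwork is observed.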
