Search Results for author: Gesualdo Scutari

Found 12 papers, 2 papers with code

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity

no code implementations • 30 May 2022 • Dmitry Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, Gesualdo Scutari

Finally, the method is extended to distributed saddle-point problems (under function similarity) by solving a class of variational inequalities, achieving lower communication and computation complexity bounds.
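
For orientation, a minimal sketch of the composite template that gradient sliding targets (our notation, not necessarily the paper's): minimize $r(x) := p(x) + q(x)$ over $x \in \mathbb{R}^d$, where $p$ is $L_p$-smooth, $q$ is $L_q$-smooth, and $r$ is $\mu$-strongly convex. Sliding decouples the two oracles, targeting on the order of $\sqrt{L_p/\mu}\,\log(1/\varepsilon)$ calls to $\nabla p$ and $\sqrt{L_q/\mu}\,\log(1/\varepsilon)$ calls to $\nabla q$; in the distributed application, the oracle that triggers communication is then invoked only as often as the similarity between local functions requires.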

Distributed Optimization

High-Dimensional Inference over Networks: Linear Convergence and Statistical Guarantees

no code implementations • 21 Jan 2022 • Ying Sun, Marie Maros, Gesualdo Scutari, Guang Cheng

Our theory shows that, under standard notions of restricted strong convexity and smoothness of the loss functions, along with suitable conditions on the network connectivity and algorithm tuning, the distributed algorithm converges globally at a linear rate to an estimate that is within the centralized statistical precision of the model, $O(s\log d/N)$.
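
As a worked instance of the rate (our illustration; the paper's result covers general losses under the stated conditions): in a sparse linear model $y = X\theta^* + \varepsilon$ with $\theta^* \in \mathbb{R}^d$, $\|\theta^*\|_0 \le s$, and $N$ samples pooled across the network, centralized $\ell_1$-regularized estimation attains $\|\hat{\theta} - \theta^*\|_2^2 = O(s\log d/N)$; the statement above says the decentralized iterates enter this statistical ball geometrically fast, so the optimization error is driven below the irreducible statistical error.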

Distributed Saddle-Point Problems Under Data Similarity

no code implementations • NeurIPS 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov

We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and mesh (thus decentralized) networks.
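
Concretely, one common formalization of this setup (ours; the paper's may differ in detail): $n$ agents jointly solve $\min_x \max_y f(x,y) := \frac{1}{n}\sum_{i=1}^{n} f_i(x,y)$, where $f_i$ is built from agent $i$'s local data, and data similarity is captured by a Hessian-closeness bound such as $\|\nabla^2 f_i(z) - \nabla^2 f(z)\| \le \delta$ for all $z$; the smaller $\delta$, the less communication should be needed.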

Distributed Sparse Regression via Penalization

no code implementations • 12 Nov 2021 • Yao Ji, Gesualdo Scutari, Ying Sun, Harsha Honnappa

First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity level, $d$ is the ambient dimension, and $N$ is the total sample size in the network; this matches centralized sample rates.
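
A hedged sketch of what "via penalization" typically means here (the paper's exact formulation may differ): each agent $i$ keeps a local copy $\theta_i$, and the consensus constraint $\theta_1 = \dots = \theta_m$ is replaced by a quadratic penalty weighted by the network, $\min_{\theta_1,\dots,\theta_m} \sum_i \big[ L_i(\theta_i) + \lambda \|\theta_i\|_1 \big] + \frac{\rho}{2}\sum_{i<j} w_{ij}\|\theta_i - \theta_j\|_2^2$ with $w_{ij} > 0$ only between neighbors; the guarantee above says the minimizer of this penalized surrogate is already within the centralized minimax precision.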

Regression

Acceleration in Distributed Optimization under Similarity

no code implementations • 24 Oct 2021 • Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov

To reduce the number of communications required to reach a target solution accuracy, we propose a preconditioned, accelerated distributed method.
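
For intuition, a generic similarity-preconditioned update in the spirit of such methods (not necessarily the paper's exact scheme): using one agent's local function $f_1$ as a preconditioner for the global $f$, iterate $x^{k+1} = \arg\min_x \{ f_1(x) + \langle \nabla f(x^k) - \nabla f_1(x^k),\, x \rangle + \frac{\beta}{2}\|x - x^k\|_2^2 \}$; each round of communication (to form $\nabla f(x^k)$) is traded for local computation, and acceleration is layered on top of this step.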

Distributed Optimization

Finite-Bit Quantization For Distributed Algorithms With Linear Convergence

no code implementations • 23 Jul 2021 • Nicolò Michelusi, Gesualdo Scutari, Chang-Shen Lee

This paper studies distributed algorithms for (strongly convex) composite optimization problems over mesh networks, subject to quantized communications.
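
To make "finite-bit quantization" concrete, below is a minimal deterministic uniform quantizer of the kind such schemes build on; it is an illustrative sketch under our own assumptions (the function name and interface are ours), not the specific rule analyzed in the paper.

```python
import numpy as np

def finite_bit_quantize(x, center, radius, bits):
    """Map each entry of x to the nearest point of a uniform grid with
    2**bits levels covering [center - radius, center + radius].

    Illustrative only; the paper's quantizer may differ. In quantized
    schemes that preserve linear convergence, the radius is typically
    shrunk geometrically across iterations, so a fixed bit budget
    suffices to track the shrinking uncertainty set.
    """
    levels = 2 ** bits
    lo = center - radius
    step = 2.0 * radius / (levels - 1)                   # grid spacing
    idx = np.round((np.clip(x, lo, center + radius) - lo) / step)
    return lo + idx * step                               # nearest grid point

# Example: 4-bit quantization of a local iterate around a reference point.
x = np.array([0.12, -0.48, 0.91])
print(finite_bit_quantize(x, center=0.0, radius=1.0, bits=4))
```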

Quantization

Distributed Saddle-Point Problems Under Similarity

1 code implementation • 22 Jul 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov

We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and mesh (thus decentralized) networks.

Kernel Bi-Linear Modeling for Reconstructing Data on Manifolds: The Dynamic-MRI Case

no code implementations • 27 Feb 2020 • Gaurav N. Shetty, Konstantinos Slavakis, Ukash Nakarmi, Gesualdo Scutari, Leslie Ying

This paper establishes a kernel-based framework for reconstructing data on manifolds, tailored to the dynamic-MRI (dMRI) data-recovery problem.

Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks

1 code implementation • 23 Oct 2019 • Jinming Xu, Ye Tian, Ying Sun, Gesualdo Scutari

This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that uses only gradient information and gossip communications.
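
As a toy illustration of "gradient information and gossip communications" (a generic decentralized gradient iteration, sketched under our own assumptions; the paper's primal-dual family additionally maintains dual variables not shown here):

```python
import numpy as np

def gossip_gradient_round(X, grad_fn, W, alpha):
    """One synchronous round of a generic gossip-based method.

    X[i]    : agent i's current iterate (row i of X)
    grad_fn : grad_fn(i, x) returns agent i's local gradient at x
    W       : doubly stochastic gossip matrix matched to the network
              (W[i, j] > 0 only if i and j are neighbors)
    alpha   : step size
    """
    grads = np.stack([grad_fn(i, X[i]) for i in range(X.shape[0])])
    return W @ X - alpha * grads   # average with neighbors, then descend

# Toy usage: 4 agents on a ring minimize the average of local quadratics
# f_i(x) = 0.5 * ||x - t_i||^2, whose consensus optimum is mean(t_i).
n, d = 4, 2
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
X = np.zeros((n, d))
for _ in range(200):
    X = gossip_gradient_round(X, lambda i, x: x - targets[i], W, alpha=0.1)
print(X.mean(axis=0))   # close to targets.mean(axis=0)
```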

Distributed Optimization

Bi-Linear Modeling of Data Manifolds for Dynamic-MRI Recovery

no code implementations • 27 Dec 2018 • Gaurav N. Shetty, Konstantinos Slavakis, Abhishek Bose, Ukash Nakarmi, Gesualdo Scutari, Leslie Ying

This paper puts forth a novel bi-linear modeling framework for data recovery via manifold-learning and sparse-approximation arguments and considers its application to dynamic magnetic-resonance imaging (dMRI).
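
In template form, such bi-linear recovery poses the image sequence as a product of two unknown factors: with $b$ the (undersampled) k-space measurements and $\mathcal{A}$ the sampling operator, solve $\min_{U,V} \frac{1}{2}\|\mathcal{A}(UV) - b\|_2^2 + \lambda\, \Omega(U, V)$, where $\Omega$ collects the manifold-learning and sparsity priors. This template is our paraphrase; the paper's precise factors and regularizers differ in detail.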

Dimensionality Reduction

Decentralized Dictionary Learning Over Time-Varying Digraphs

no code implementations • 17 Aug 2018 • Amir Daneshmand, Ying Sun, Gesualdo Scutari, Francisco Facchinei, Brian M. Sadler

This paper studies Dictionary Learning problems wherein the learning task is distributed over a multi-agent network, modeled as a time-varying directed graph.
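
For reference, the canonical dictionary-learning objective that such distributed schemes split across agents (a standard template, our paraphrase of the setup): $\min_{D \in \mathcal{D},\, \{s_j\}} \sum_j \big[ \frac{1}{2}\|y_j - D s_j\|_2^2 + \lambda \|s_j\|_1 \big]$, with the data $\{y_j\}$ partitioned over the agents, the dictionary $D$ constrained to a compact set $\mathcal{D}$, and the crux being that the shared variable $D$ must be agreed upon over a time-varying directed network.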

Dictionary Learning

Distributed Dictionary Learning

no code implementations • 21 Dec 2016 • Amir Daneshmand, Gesualdo Scutari, Francisco Facchinei

The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multi-agent network with time-varying (nonsymmetric) connectivity.

Dictionary Learning
