no code implementations • 30 May 2022 • Dmitry Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, Gesualdo Scutari
Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving lower communication and computation complexity bounds.
no code implementations • 21 Jan 2022 • Ying Sun, Marie Maros, Gesualdo Scutari, Guang Cheng
Our theory shows that, under standard notions of restricted strong convexity and smoothness of the loss functions, and suitable conditions on the network connectivity and algorithm tuning, the distributed algorithm converges globally at a {\it linear} rate to an estimate that is within the centralized {\it statistical precision} of the model, $O(s\log d/N)$.
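The consensus-plus-local-update structure that such guarantees rest on can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the ring topology, quadratic local losses, and $\ell_1$ penalty (whose prox is soft-thresholding) are all assumptions made for the example.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring of n agents:
    each agent averages equally with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (promotes sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def decentralized_prox_grad(A_list, b_list, W, lam=0.1, step=0.01, iters=2000):
    """Each agent i holds local data (A_i, b_i) for the loss
    0.5 * ||A_i x - b_i||^2; every round, agents mix their estimates
    with neighbors (gossip step, governed by W), then take a local
    proximal-gradient step toward the l1-penalized solution."""
    n, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((n, d))                  # row i = agent i's estimate
    for _ in range(iters):
        X = W @ X                         # gossip / consensus step
        for i in range(n):
            grad = A_list[i].T @ (A_list[i] @ X[i] - b_list[i])
            X[i] = soft_threshold(X[i] - step * grad, step * lam)
    return X
```

With consistent local data (all agents' losses minimized at the same sparse vector), the iterates reach approximate consensus near that vector; the network connectivity enters through the spectral gap of `W`, which is why conditions on the graph appear in results of this kind.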
no code implementations • NeurIPS 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and mesh (thus decentralized) networks.
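A standard single-machine workhorse for such SPPs, on which distributed variants build, is the extragradient method. A minimal sketch on a bilinear toy problem (an illustrative assumption, not the paper's distributed algorithm):

```python
import numpy as np

def extragradient_bilinear(B, step=0.1, iters=2000):
    """Extragradient method for the bilinear saddle-point problem
    min_x max_y x^T B y, whose saddle point is (0, 0). Plain gradient
    descent-ascent cycles/diverges on this problem; the extrapolation
    ("look-ahead") step restores convergence."""
    d_x, d_y = B.shape
    x, y = np.ones(d_x), np.ones(d_y)
    for _ in range(iters):
        # look-ahead (extrapolation) step
        x_half = x - step * (B @ y)
        y_half = y + step * (B.T @ x)
        # update step, using gradients evaluated at the look-ahead point
        x = x - step * (B @ y_half)
        y = y + step * (B.T @ x_half)
    return x, y
```

In the networked setting, the gradient evaluations above are replaced by local computations interleaved with communication rounds, which is where the centralized-vs-mesh distinction and the communication lower bounds enter.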
no code implementations • 12 Nov 2021 • Yao Ji, Gesualdo Scutari, Ying Sun, Harsha Honnappa
First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity level, $d$ is the ambient dimension, and $N$ is the total sample size in the network -- this matches the centralized sample rate.
no code implementations • 24 Oct 2021 • Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov
In order to reduce the number of communications required to reach a given solution accuracy, we propose a {\it preconditioned, accelerated} distributed method.
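The acceleration mechanism that cuts the iteration (and hence communication) count is, at its core, Nesterov-style extrapolation: evaluating the gradient at a momentum-shifted point improves the dependence on the condition number $\kappa$ from $O(\kappa)$ to $O(\sqrt{\kappa})$. A minimal centralized sketch of that mechanism (the quadratic objective and parameter choices are illustrative assumptions, not the paper's method):

```python
import numpy as np

def nesterov_accelerated(grad, x0, step, momentum, iters=500):
    """Nesterov-accelerated gradient descent: take the gradient at an
    extrapolated point y = x + momentum * (x - x_prev). For an
    L-smooth, mu-strongly convex objective with kappa = L/mu, the
    standard choices are step = 1/L and
    momentum = (sqrt(kappa) - 1) / (sqrt(kappa) + 1)."""
    x = x_prev = np.array(x0, dtype=float)
    for _ in range(iters):
        y = x + momentum * (x - x_prev)       # extrapolation step
        x_prev, x = x, y - step * grad(y)     # gradient step at y
    return x
```

In a distributed method each gradient step costs communication rounds, so the $O(\kappa)\to O(\sqrt{\kappa})$ improvement translates directly into fewer communications.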
no code implementations • 23 Jul 2021 • Nicolò Michelusi, Gesualdo Scutari, Chang-Shen Lee
This paper studies distributed algorithms for (strongly convex) composite optimization problems over mesh networks, subject to quantized communications.
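Quantized communication is typically modeled by applying a finite-precision quantizer to every transmitted vector. A minimal sketch of a deterministic uniform quantizer with a fixed bit budget per entry (an illustrative model, not the paper's specific quantization rule):

```python
import numpy as np

def uniform_quantize(x, num_bits=4, dynamic_range=1.0):
    """Quantize each entry of x to one of 2**num_bits uniformly spaced
    levels on [-dynamic_range, dynamic_range], clipping values outside
    the range. Per-entry error is at most delta/2, where delta is the
    spacing between adjacent levels."""
    levels = 2 ** num_bits
    delta = 2 * dynamic_range / (levels - 1)
    clipped = np.clip(x, -dynamic_range, dynamic_range)
    return np.round((clipped + dynamic_range) / delta) * delta - dynamic_range
```

The bounded per-entry error `delta/2` is what convergence analyses of quantized distributed methods track: shrinking the dynamic range (or spending more bits) as iterates converge keeps the quantization noise from dominating.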
1 code implementation • 22 Jul 2021 • Aleksandr Beznosikov, Gesualdo Scutari, Alexander Rogozin, Alexander Gasnikov
We study solution methods for (strongly-)convex-(strongly-)concave Saddle-Point Problems (SPPs) over networks of two types: master/workers (thus centralized) architectures and meshed (thus decentralized) networks.
no code implementations • 27 Feb 2020 • Gaurav N. Shetty, Konstantinos Slavakis, Ukash Nakarmi, Gesualdo Scutari, Leslie Ying
This paper establishes a kernel-based framework for reconstructing data on manifolds, tailored to the dynamic-MRI (dMRI) data-recovery problem.
1 code implementation • 23 Oct 2019 • Jinming Xu, Ye Tian, Ying Sun, Gesualdo Scutari
This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that uses only gradient information and gossip communications.
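Many primal-dual distributed schemes of this flavor can be written as a gossip step plus a gradient-tracking correction, where an auxiliary variable tracks the network-average gradient. A minimal sketch (the quadratic local losses and the fully connected mixing matrix in the test are illustrative assumptions, not the paper's specific algorithm):

```python
import numpy as np

def gradient_tracking(grads, W, step=0.1, iters=300, d=2):
    """Decentralized gradient tracking: each agent mixes its estimate
    and its gradient tracker with neighbors (one gossip round each per
    iteration, using the doubly stochastic matrix W), then steps along
    the tracker G, which asymptotically tracks the average gradient.
    `grads[i]` maps an iterate to agent i's local gradient."""
    n = len(grads)
    X = np.zeros((n, d))                               # estimates
    G = np.array([grads[i](X[i]) for i in range(n)])   # trackers
    for _ in range(iters):
        X_new = W @ X - step * G
        G = W @ G + np.array([grads[i](X_new[i]) - grads[i](X[i])
                              for i in range(n)])
        X = X_new
    return X
```

Because only gradients and gossip rounds are used, the per-iteration cost matches the "gradient information and gossip communications" oracle the family is built on.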
no code implementations • 27 Dec 2018 • Gaurav N. Shetty, Konstantinos Slavakis, Abhishek Bose, Ukash Nakarmi, Gesualdo Scutari, Leslie Ying
This paper puts forth a novel bi-linear modeling framework for data recovery via manifold-learning and sparse-approximation arguments and considers its application to dynamic magnetic-resonance imaging (dMRI).
no code implementations • 17 Aug 2018 • Amir Daneshmand, Ying Sun, Gesualdo Scutari, Francisco Facchinei, Brian M. Sadler
This paper studies Dictionary Learning problems wherein the learning task is distributed over a multi-agent network, modeled as a time-varying directed graph.
no code implementations • 21 Dec 2016 • Amir Daneshmand, Gesualdo Scutari, Francisco Facchinei
The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multi-agent network with time-varying (nonsymmetric) connectivity.