no code implementations • 7 Feb 2022 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
This paper proposes AB-SAGA, a first-order distributed stochastic optimization method to minimize a finite-sum of smooth and strongly convex functions distributed over an arbitrary directed graph.
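The SAGA variance-reduction component that AB-SAGA builds on can be illustrated with a minimal single-node sketch (the quadratic components, step size, and loop count are assumptions for the example; this is not the authors' distributed implementation):

```python
import numpy as np

# Single-node sketch of the SAGA variance-reduced estimator, the building
# block of AB-SAGA. Problem data and step size are illustrative only.
# Minimize (1/m) * sum_j 0.5 * (x - a_j)^2, whose minimizer is a.mean().
rng = np.random.default_rng(0)
a = rng.normal(size=20)            # m = 20 component functions
m = a.size
x = 0.0
table = x - a                      # stored gradient of each component at x
avg = table.mean()                 # running average of the stored gradients
lr = 0.3                           # step size (assumed, roughly 1/(3L))

for _ in range(5000):
    j = rng.integers(m)            # sample one component uniformly
    g_j = x - a[j]                 # fresh gradient of component j
    est = g_j - table[j] + avg     # SAGA estimator: unbiased, shrinking variance
    avg += (g_j - table[j]) / m    # keep the running average consistent
    table[j] = g_j                 # refresh the table entry
    x -= lr * est

print(abs(x - a.mean()))           # distance to the minimizer
```

Unlike plain SGD, the estimator's variance vanishes at the optimum, which is what enables the linear convergence rates these papers establish.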
no code implementations • 12 Feb 2021 • Ran Xin, Usman A. Khan, Soummya Kar
This paper considers decentralized stochastic optimization over a network of $n$ nodes, where each node possesses a smooth non-convex local cost function and the goal of the networked nodes is to find an $\epsilon$-accurate first-order stationary point of the sum of the local costs.
no code implementations • 7 Nov 2020 • Ran Xin, Usman A. Khan, Soummya Kar
For general smooth non-convex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point, and further describe regimes of practical significance where it outperforms existing approaches and achieves a network topology-independent iteration complexity.
no code implementations • 12 Sep 2020 • Ran Xin, Shi Pu, Angelia Nedić, Usman A. Khan
Decentralized optimization to minimize a finite sum of functions over a network of nodes has been a significant focus within control and signal processing research due to its natural relevance to optimal control and signal estimation problems.
no code implementations • 17 Aug 2020 • Ran Xin, Usman A. Khan, Soummya Kar
We show that GT-SARAH, with appropriate algorithmic parameters, finds an $\epsilon$-accurate first-order stationary point with $O\big(\max\big\{N^{\frac{1}{2}}, n(1-\lambda)^{-2}, n^{\frac{2}{3}}m^{\frac{1}{3}}(1-\lambda)^{-1}\big\}L\epsilon^{-2}\big)$ gradient complexity, where ${(1-\lambda)\in(0, 1]}$ is the spectral gap of the network weight matrix and $L$ is the smoothness parameter of the cost functions.
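The SARAH recursive estimator at the core of GT-SARAH can be sketched in a single-node setting (a hedged illustration only; the quadratic costs, curvatures, step size, and loop lengths are assumptions, not the authors' algorithm parameters):

```python
import numpy as np

# Single-node sketch of the SARAH recursive gradient estimator used inside
# GT-SARAH (illustrative assumptions throughout). Minimize
# (1/m) * sum_j 0.5 * c_j * (x - a_j)^2, minimized at (c*a).sum()/c.sum().
rng = np.random.default_rng(1)
m = 20
a = rng.normal(size=m)
c = rng.uniform(0.5, 1.5, size=m)  # per-component curvatures
x_star = (c * a).sum() / c.sum()   # closed-form minimizer, for reference
x = 0.0
lr = 0.2                           # step size (assumed)

for _ in range(50):                        # outer loops: full-gradient restart
    v = np.mean(c * (x - a))               # exact gradient of the average cost
    x_prev, x = x, x - lr * v
    for _ in range(m):                     # inner loop: recursive updates
        j = rng.integers(m)
        # SARAH recursion: biased but low-variance estimator of the gradient
        v = c[j] * (x - a[j]) - c[j] * (x_prev - a[j]) + v
        x_prev, x = x, x - lr * v

print(abs(x - x_star))             # distance to the minimizer
```

The recursion reuses the previous estimate `v`, so only one fresh component gradient is needed per inner step, which is the source of the gradient-complexity savings quoted above.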
1 code implementation • 13 Aug 2020 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
In this paper, we propose Push-SAGA, a decentralized stochastic first-order method for finite-sum minimization over a directed network of nodes.
no code implementations • 10 Aug 2020 • Ran Xin, Usman A. Khan, Soummya Kar
In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes.
2 code implementations • 15 May 2020 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes.
no code implementations • 13 Feb 2020 • Ran Xin, Soummya Kar, Usman A. Khan
Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data is distributed over a network of nodes and raw data sharing is not permitted due to privacy and/or resource constraints.
no code implementations • 8 Oct 2019 • Ran Xin, Usman A. Khan, Soummya Kar
Decentralized stochastic optimization has recently benefited from gradient tracking methods [DSGT_Pu, DSGT_Xin], which provide efficient solutions for large-scale empirical risk minimization problems.
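The gradient-tracking mechanism referenced across these entries can be sketched with a small deterministic example (an assumed four-node ring with quadratic local costs; not the implementation from the cited works):

```python
import numpy as np

# Minimal sketch of decentralized gradient tracking (assumed ring network
# and quadratic local costs; not the cited DSGT implementations).
# Node i minimizes f_i(x) = 0.5 * (x - b_i)^2; the global optimum is b.mean().
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])

W = np.zeros((n, n))               # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - b             # stacked local gradients
x = np.zeros(n)                    # one scalar iterate per node
g = grad(x)
y = g.copy()                       # trackers, initialized to local gradients
alpha = 0.1                        # step size (assumed)

for _ in range(500):
    x_new = W @ x - alpha * y      # mix with neighbors, step along tracker
    g_new = grad(x_new)
    y = W @ y + g_new - g          # track the average of local gradients
    x, g = x_new, g_new

print(np.abs(x - b.mean()).max()) # consensus error at the global optimum
```

Because the tracker `y` preserves the average of the local gradients at every iteration, each node descends along an estimate of the *global* gradient, removing the steady-state bias of plain decentralized gradient descent.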
no code implementations • 25 Sep 2019 • Ran Xin, Usman A. Khan, Soummya Kar
In this paper, we study decentralized empirical risk minimization problems, where the goal is to minimize a finite-sum of smooth and strongly-convex functions available over a network of nodes.
no code implementations • 23 Jul 2019 • Ran Xin, Soummya Kar, Usman A. Khan
Decentralized solutions to finite-sum minimization are of significant importance in many signal processing, control, and machine learning applications.
no code implementations • 18 Mar 2019 • Ran Xin, Anit Kumar Sahu, Usman A. Khan, Soummya Kar
In this paper, we study distributed stochastic optimization to minimize a sum of smooth and strongly-convex local cost functions over a network of agents, communicating over a strongly-connected graph.
no code implementations • 21 Jan 2019 • Ran Xin, Dusan Jakovetic, Usman A. Khan
In this letter, we introduce a distributed Nesterov method, termed $\mathcal{ABN}$, that does not require doubly-stochastic weight matrices.
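The plain AB update that $\mathcal{ABN}$ accelerates can be sketched as follows (Nesterov momentum omitted; the three-node directed graph, weights, and step size are assumptions for illustration). The key point is that the iterate-mixing matrix $A$ is only row stochastic and the tracker-mixing matrix $B$ only column stochastic, so no doubly-stochastic weights are constructed:

```python
import numpy as np

# Sketch of the AB gradient-tracking update over a directed graph
# (momentum omitted; graph and parameters assumed). Directed edges:
# 0->1, 1->2, 2->0, 0->2, which is strongly connected.
A = np.array([[1/2, 0,   1/2],     # row stochastic: row i averages over
              [1/2, 1/2, 0  ],     # node i's in-neighbors and itself
              [1/3, 1/3, 1/3]])
B = np.array([[1/3, 0,   1/2],     # column stochastic: column j splits node
              [1/3, 1/2, 0  ],     # j's tracker over its out-neighbors
              [1/3, 1/2, 1/2]])    # and itself

b = np.array([1.0, 2.0, 3.0])      # f_i(x) = 0.5*(x - b_i)^2; optimum b.mean()
grad = lambda x: x - b
x = np.zeros(3)
g = grad(x)
y = g.copy()                       # trackers, initialized to local gradients
alpha = 0.05                       # step size (assumed)

for _ in range(3000):
    x_new = A @ x - alpha * y      # row-stochastic mixing of iterates
    g_new = grad(x_new)
    y = B @ y + g_new - g          # column-stochastic gradient tracking
    x, g = x_new, g_new

print(np.abs(x - b.mean()).max()) # all nodes approach the global optimum
```

Row-stochastic mixing keeps consensus on the iterates, while column-stochastic mixing preserves the sum of the trackers, so the pair replaces the doubly-stochastic matrix that undirected-graph methods require.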