Search Results for author: Ran Xin

Found 14 papers, 2 papers with code

Variance reduced stochastic optimization over directed graphs with row and column stochastic weights

no code implementations7 Feb 2022 Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan

This paper proposes AB-SAGA, a first-order distributed stochastic optimization method to minimize a finite-sum of smooth and strongly convex functions distributed over an arbitrary directed graph.

Stochastic Optimization
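
A minimal sketch of the update structure such a method could take: row-stochastic mixing for the estimates, column-stochastic mixing for the gradient tracker, and a per-node SAGA estimator. The variable names, step size, and exact placement of the estimator are illustrative assumptions, not the paper's verbatim algorithm.

```python
import numpy as np

def ab_saga_sketch(A, B, grads, x0, alpha, m, iters, rng):
    """Hypothetical AB-SAGA-style iteration. A is row stochastic, B is
    column stochastic, and grads[i][j](x) returns the gradient of
    sample j at node i; each row of x is one node's estimate."""
    n, d = x0.shape
    table = np.array([[grads[i][j](x0[i]) for j in range(m)] for i in range(n)])
    g = table.mean(axis=1)              # initial SAGA estimates, one per node
    x, y = x0.copy(), g.copy()          # tracker starts at the estimator
    for _ in range(iters):
        x = A @ x - alpha * y           # row-stochastic consensus + descent
        g_new = np.empty_like(g)
        for i in range(n):
            j = rng.integers(m)         # draw one local sample index
            gj = grads[i][j](x[i])
            g_new[i] = gj - table[i, j] + table[i].mean(axis=0)
            table[i, j] = gj            # refresh the gradient table
        y = B @ y + g_new - g           # column-stochastic gradient tracking
        g = g_new
    return x
```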

A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization

no code implementations12 Feb 2021 Ran Xin, Usman A. Khan, Soummya Kar

This paper considers decentralized stochastic optimization over a network of $n$ nodes, where each node possesses a smooth non-convex local cost function and the goal of the networked nodes is to find an $\epsilon$-accurate first-order stationary point of the sum of the local costs.

Stochastic Optimization
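
In the abstract's notation, the problem and the accuracy criterion can be formalized as follows; this is one standard convention, not a quotation from the paper:

$$\min_{x\in\mathbb{R}^p} F(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x), \qquad \mathbb{E}\big[\|\nabla F(\hat{x})\|^2\big] \le \epsilon,$$

where $\hat{x}$ is the iterate returned by the algorithm; any point satisfying the right-hand inequality is an $\epsilon$-accurate first-order stationary point.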

A fast randomized incremental gradient method for decentralized non-convex optimization

no code implementations7 Nov 2020 Ran Xin, Usman A. Khan, Soummya Kar

For general smooth non-convex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance where it outperforms existing approaches and achieves a network topology-independent iteration complexity.
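
At the core of a GT-SAGA-type method is the SAGA estimator, which replaces the full local gradient with a bias-corrected sample. A minimal single-node sketch, with assumed interfaces:

```python
import numpy as np

def saga_estimator(x, table, grad_fn, rng):
    """One SAGA gradient estimate for a node holding len(table) samples.
    table[j] stores the gradient last computed for sample j, and
    grad_fn(j, x) returns the gradient of sample j at x."""
    j = rng.integers(len(table))
    g_new = grad_fn(j, x)
    # fresh sample gradient, minus its stale copy, plus the running
    # average of all stored gradients -- an unbiased estimate
    estimate = g_new - table[j] + np.mean(table, axis=0)
    table[j] = g_new
    return estimate
```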

A general framework for decentralized optimization with first-order methods

no code implementations12 Sep 2020 Ran Xin, Shi Pu, Angelia Nedić, Usman A. Khan

Decentralized optimization to minimize a finite sum of functions over a network of nodes has been a significant focus within control and signal processing research due to its natural relevance to optimal control and signal estimation problems.

Machine Learning

Fast decentralized non-convex finite-sum optimization with recursive variance reduction

no code implementations17 Aug 2020 Ran Xin, Usman A. Khan, Soummya Kar

We show that GT-SARAH, with appropriate algorithmic parameters, finds an $\epsilon$-accurate first-order stationary point with $O\big(\max\big\{N^{\frac{1}{2}}, n(1-\lambda)^{-2}, n^{\frac{2}{3}}m^{\frac{1}{3}}(1-\lambda)^{-1}\big\}L\epsilon^{-2}\big)$ gradient complexity, where ${(1-\lambda)\in(0, 1]}$ is the spectral gap of the network weight matrix and $L$ is the smoothness parameter of the cost functions.
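
As a numerical illustration of the bound, assuming (as is standard in this line of work) that $n$ is the number of nodes, $m$ the number of samples per node, and $N = mn$ the total sample count; the values below are arbitrary:

```python
# Evaluate the GT-SARAH gradient-complexity bound for illustrative values.
n, m = 20, 500            # nodes, samples per node (hypothetical)
N = n * m                 # total number of samples
gap = 0.1                 # spectral gap (1 - lambda) of the weight matrix
L, eps = 1.0, 1e-3        # smoothness constant, target accuracy

bound = max(N**0.5, n / gap**2, n**(2/3) * m**(1/3) / gap) * L / eps
print(f"gradient complexity ~ O({bound:.3e})")  # here the n/gap^2 term dominates
```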

Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs

1 code implementation13 Aug 2020 Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan

In this paper, we propose Push-SAGA, a decentralized stochastic first-order method for finite-sum minimization over a directed network of nodes.
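
Over directed graphs, only column-stochastic weights $B$ are generally implementable, and their bias is removed by a push-sum ratio. In its plain consensus form (a standard construction; Push-SAGA's exact recursion adds descent and tracking terms):

$$z_{k+1} = B\,z_k, \qquad w_{k+1} = B\,w_k,\ \ w_0 = \mathbf{1}, \qquad x_k = z_k \oslash w_k,$$

where $\oslash$ denotes entrywise division; each ratio $x_{i,k}$ converges to the average of the initial values $z_{i,0}$.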

An improved convergence analysis for decentralized online stochastic non-convex optimization

no code implementations10 Aug 2020 Ran Xin, Usman A. Khan, Soummya Kar

In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes.

S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs

2 code implementations15 May 2020 Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan

In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes.

Stochastic Optimization

Gradient tracking and variance reduction for decentralized optimization and machine learning

no code implementations13 Feb 2020 Ran Xin, Soummya Kar, Usman A. Khan

Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data is distributed over a network of nodes and raw data sharing is not permitted due to privacy and/or resource constraints.

Machine Learning
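
The gradient tracking mechanism surveyed here is easy to simulate; below is a minimal sketch with deterministic gradients and a doubly-stochastic weight matrix, all names illustrative:

```python
import numpy as np

def gradient_tracking(W, grad, x0, alpha, iters):
    """Decentralized gradient tracking: each row of x is one node's
    iterate, and y tracks the network-average gradient. grad(x)
    returns the stacked local gradients, one row per node."""
    x = x0.copy()
    g = grad(x)
    y = g.copy()                      # trackers start at the local gradients
    for _ in range(iters):
        x = W @ x - alpha * y         # consensus step plus descent
        g_new = grad(x)
        y = W @ y + g_new - g         # dynamic average consensus on gradients
        g = g_new
    return x

# Example: 4 nodes minimizing sum_i ||x - b_i||^2 / 2; the optimum is mean(b_i).
n, d = 4, 3
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
W = np.full((n, n), 1.0 / n)          # complete graph, doubly stochastic
x = gradient_tracking(W, lambda x: x - b, np.zeros((n, d)), 0.2, 200)
print(np.allclose(x, b.mean(axis=0)))  # True: all nodes agree on the average
```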

Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking -- Part II: GT-SVRG

no code implementations8 Oct 2019 Ran Xin, Usman A. Khan, Soummya Kar

Decentralized stochastic optimization has recently benefited from gradient tracking methods [DSGT_Pu, DSGT_Xin], which provide efficient solutions for large-scale empirical risk minimization problems.

Stochastic Optimization
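
What distinguishes the SVRG-type estimator used here from SAGA is that it stores a single snapshot point and its full local gradient rather than a per-sample table; a minimal single-node sketch with assumed interfaces:

```python
def svrg_estimator(x, snapshot, full_grad, grad_fn, m, rng):
    """One SVRG gradient estimate: the sampled gradient at x, corrected
    by the same sample's gradient at the last snapshot plus the full
    gradient at that snapshot. grad_fn(j, x) is sample j's gradient at x."""
    j = rng.integers(m)
    return grad_fn(j, x) - grad_fn(j, snapshot) + full_grad
```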

Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking

no code implementations25 Sep 2019 Ran Xin, Usman A. Khan, Soummya Kar

In this paper, we study decentralized empirical risk minimization problems, where the goal is to minimize a finite-sum of smooth and strongly-convex functions available over a network of nodes.

Optimization and Control

An introduction to decentralized stochastic optimization with gradient tracking

no code implementations23 Jul 2019 Ran Xin, Soummya Kar, Usman A. Khan

Decentralized solutions to finite-sum minimization are of significant importance in many signal processing, control, and machine learning applications.

Machine Learning, Stochastic Optimization
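
The recursion at the center of this tutorial is the standard gradient tracking update; for node $i$ with doubly-stochastic weights $w_{ij}$ (stochastic gradients replace $\nabla f_i$ in the online setting):

$$x_{i,k+1} = \sum_{j=1}^{n} w_{ij}\, x_{j,k} - \alpha\, y_{i,k}, \qquad y_{i,k+1} = \sum_{j=1}^{n} w_{ij}\, y_{j,k} + \nabla f_i(x_{i,k+1}) - \nabla f_i(x_{i,k}),$$

with $y_{i,0} = \nabla f_i(x_{i,0})$; this initialization preserves the invariant that the average of the trackers equals the average of the current local gradients at every iteration.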

Distributed stochastic optimization with gradient tracking over strongly-connected networks

no code implementations18 Mar 2019 Ran Xin, Anit Kumar Sahu, Usman A. Khan, Soummya Kar

In this paper, we study distributed stochastic optimization to minimize a sum of smooth and strongly-convex local cost functions over a network of agents, communicating over a strongly-connected graph.

Stochastic Optimization

Distributed Nesterov gradient methods over arbitrary graphs

no code implementations21 Jan 2019 Ran Xin, Dusan Jakovetic, Usman A. Khan

In this letter, we introduce a distributed Nesterov method, termed $\mathcal{ABN}$, that does not require doubly-stochastic weight matrices.

Distributed Optimization
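
The underlying $\mathcal{AB}$ iteration replaces the doubly-stochastic weight matrix with a row-stochastic $A$ and a column-stochastic $B$; omitting the Nesterov momentum that $\mathcal{ABN}$ adds, a sketch of the base recursion (not the paper's exact statement) is:

$$x_{k+1} = A\,x_k - \alpha\, y_k, \qquad y_{k+1} = B\,y_k + \nabla f(x_{k+1}) - \nabla f(x_k),$$

where each row of $x_k$ and $y_k$ is held by one node and $\nabla f(\cdot)$ stacks the local gradients.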
