Search Results for author: Yuejiao Sun

Found 11 papers, 2 papers with code

Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems

no code implementations NeurIPS 2021 Tianyi Chen, Yuejiao Sun, Wotao Yin

By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems.

Bilevel Optimization

Tighter Analysis of Alternating Stochastic Gradient Method for Stochastic Nested Problems

no code implementations 25 Jun 2021 Tianyi Chen, Yuejiao Sun, Wotao Yin

By leveraging the hidden smoothness of the problem, this paper presents a tighter analysis of ALSET for stochastic nested problems.

Bilevel Optimization
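
The two entries above study the same alternating stochastic gradient (ALSET-style) update for stochastic nested/bilevel problems. Below is a minimal sketch of that alternating pattern on a toy quadratic bilevel problem; the step sizes, noise model, and single inner step per iteration are illustrative assumptions, not the paper's algorithm or analysis.

```python
# Minimal sketch (not the ALSET algorithm itself): alternating stochastic gradient
# updates for a toy quadratic bilevel problem
#   upper level:  min_x 0.5 * ||y*(x) - b||^2
#   lower level:  y*(x) = argmin_y 0.5 * ||y - A @ x||^2   (so y*(x) = A @ x)
# Step sizes, the noise model, and one inner step per outer step are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)

x, y = np.zeros(d), np.zeros(d)
alpha, beta, sigma = 0.01, 0.1, 0.01          # outer step, inner step, noise level

for k in range(2000):
    # inner (lower-level) stochastic gradient step on y
    g_y = (y - A @ x) + sigma * rng.standard_normal(d)
    y = y - beta * g_y
    # outer (upper-level) step using the current y in place of the exact y*(x)
    h_x = A.T @ ((y - b) + sigma * rng.standard_normal(d))
    x = x - alpha * h_x

print("upper-level loss:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```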

A Single-Timescale Method for Stochastic Bilevel Optimization

no code implementations 9 Feb 2021 Tianyi Chen, Yuejiao Sun, Quan Xiao, Wotao Yin

This paper develops a new optimization method for a class of stochastic bilevel problems, termed the Single-Timescale stochAstic BiLevEl optimization (STABLE) method.

Bilevel Optimization, Meta-Learning +1

CADA: Communication-Adaptive Distributed Adam

1 code implementation 31 Dec 2020 Tianyi Chen, Ziye Guo, Yuejiao Sun, Wotao Yin

This paper proposes an adaptive stochastic gradient descent method for distributed machine learning, which can be viewed as the communication-adaptive counterpart of the celebrated Adam method, hence the name CADA.

BIG-bench Machine Learning
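
As a rough illustration of the communication-adaptive idea behind CADA (not the released implementation), the sketch below has each worker skip uploading its stochastic gradient when it has changed little since its last upload, while the server runs a standard Adam step on the aggregated, possibly stale, gradients. The fixed skipping threshold and toy quadratic local losses are assumptions standing in for the adaptive rule derived in the paper.

```python
# Minimal sketch (assumptions throughout, not the released CADA implementation):
# server-side Adam with lazily communicated worker gradients.
import numpy as np

rng = np.random.default_rng(1)
d, M = 10, 4                                  # dimension, number of workers
theta = np.zeros(d)
m, v = np.zeros(d), np.zeros(d)               # Adam moment estimates
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.1
stale = [np.zeros(d) for _ in range(M)]       # last gradient uploaded by each worker
targets = [rng.standard_normal(d) for _ in range(M)]

def local_grad(i, theta):
    """Toy stochastic gradient of 0.5 * ||theta - targets[i]||^2 on worker i."""
    return (theta - targets[i]) + 0.1 * rng.standard_normal(d)

for t in range(1, 501):
    for i in range(M):
        g = local_grad(i, theta)
        if np.linalg.norm(g - stale[i]) > 0.5:   # hypothetical fixed trigger
            stale[i] = g                         # upload a fresh gradient
        # otherwise: skip communication; the server reuses stale[i]
    g_agg = np.mean(stale, axis=0)               # aggregated (possibly stale) gradient
    m = beta1 * m + (1 - beta1) * g_agg
    v = beta2 * v + (1 - beta2) * g_agg**2
    m_hat, v_hat = m / (1 - beta1**t), v / (1 - beta2**t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
```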

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization

no code implementations 25 Aug 2020 Tianyi Chen, Yuejiao Sun, Wotao Yin

In particular, we apply Adam to SCSC, and the exhibited rate of convergence matches that of the original Adam on non-compositional stochastic optimization.

Management, Meta-Learning +1
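
Below is a minimal sketch of stochastic compositional optimization in the spirit of SCSC, on an assumed toy problem min_x f(E[g(x; xi)]): a running average tracks the inner mapping while the outer variable takes stochastic quasi-gradient steps. The plain exponential average is a simplified stand-in for SCSC's corrected tracking step, and the Adam variant discussed in the paper is omitted.

```python
# Minimal sketch (not the paper's SCSC or Adam-SCSC code): compositional problem
# with f(u) = 0.5 * ||u - b||^2 and a random linear inner map g(x; xi) = A_xi @ x.
# Problem data and step sizes are assumptions.
import numpy as np

rng = np.random.default_rng(2)
d = 5
A = rng.standard_normal((d, d))               # mean of the random inner map A_xi
b = rng.standard_normal(d)

x = np.zeros(d)
y = np.zeros(d)                               # tracks the inner value E[g(x; xi)]
alpha, beta, sigma = 0.02, 0.2, 0.1

for k in range(3000):
    A_inner = A + sigma * rng.standard_normal((d, d))     # sample for the inner map
    y = (1 - beta) * y + beta * (A_inner @ x)             # running-average tracker
    A_outer = A + sigma * rng.standard_normal((d, d))     # independent Jacobian sample
    x = x - alpha * A_outer.T @ (y - b)                   # stochastic quasi-gradient step

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```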

VAFL: a Method of Vertical Asynchronous Federated Learning

no code implementations 12 Jul 2020 Tianyi Chen, Xiao Jin, Yuejiao Sun, Wotao Yin

Horizontal federated learning (FL) handles multi-client data that share the same set of features, while vertical FL trains a better predictor that combines all the features from different clients.

Federated Learning
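
To make the vertical-split setting concrete, here is a minimal synchronous sketch in which each client holds a disjoint block of features of a shared sample set and contributes only a partial score. The feature splits, model, and random labels are assumptions, and the asynchronous updates that VAFL actually studies are omitted.

```python
# Minimal sketch (illustrative, not the VAFL protocol): a vertically partitioned
# logistic model; the server sums client scores and broadcasts the residual.
import numpy as np

rng = np.random.default_rng(3)
n, splits = 200, [3, 4, 3]                    # samples; feature count held by each client
X_parts = [rng.standard_normal((n, d_c)) for d_c in splits]
y = rng.integers(0, 2, n).astype(float)
w_parts = [np.zeros(d_c) for d_c in splits]   # each client keeps its own weight block
lr = 0.1

for epoch in range(100):
    partial = [X_c @ w_c for X_c, w_c in zip(X_parts, w_parts)]  # client-side scores
    logits = np.sum(partial, axis=0)          # server combines the partial scores
    p = 1.0 / (1.0 + np.exp(-logits))
    residual = p - y                          # broadcast back to all clients
    for c in range(len(splits)):
        w_parts[c] = w_parts[c] - lr * X_parts[c].T @ residual / n  # local update
```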

LASG: Lazily Aggregated Stochastic Gradients for Communication-Efficient Distributed Learning

1 code implementation 26 Feb 2020 Tianyi Chen, Yuejiao Sun, Wotao Yin

The new algorithms adaptively choose between fresh and stale stochastic gradients and have convergence rates comparable to those of the original SGD.

Federated Learning
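
A minimal sketch of lazy gradient aggregation in the spirit of LASG (not the released code): each worker sends a fresh stochastic gradient only when it differs enough from the one it last sent, with the trigger tied to recent parameter movement. The trigger constants and toy local losses are assumptions.

```python
# Minimal sketch: distributed SGD with an illustrative lazy-aggregation trigger.
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
d, M, lr, c, D = 10, 4, 0.05, 0.5, 5
theta = np.zeros(d)
targets = [rng.standard_normal(d) for _ in range(M)]
stale = [np.zeros(d) for _ in range(M)]       # last uploaded gradient per worker
moves = deque(maxlen=D)                       # recent squared parameter movements

for k in range(500):
    thresh = (c / lr**2) * sum(moves) / max(len(moves), 1)
    for i in range(M):
        g = (theta - targets[i]) + 0.1 * rng.standard_normal(d)  # toy stochastic gradient
        if np.linalg.norm(g - stale[i])**2 >= thresh:
            stale[i] = g                      # fresh gradient: communicate it
        # else: skip communication; the server reuses stale[i]
    theta_new = theta - lr * np.mean(stale, axis=0)
    moves.append(np.linalg.norm(theta_new - theta)**2)
    theta = theta_new
```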

General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme

no code implementations NeurIPS 2019 Tao Sun, Yuejiao Sun, Dongsheng Li, Qing Liao

In this paper, we propose a general proximal incremental aggregated gradient algorithm that subsumes various existing algorithms, including the basic incremental aggregated gradient method.
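
For reference, here is a minimal sketch of the basic proximal incremental aggregated gradient (PIAG) iteration that the paper's general scheme covers, on an assumed least-squares-plus-l1 toy problem: one component gradient is refreshed per iteration and the aggregated, possibly stale, sum drives a proximal step. The problem data, cyclic refresh order, and step size are assumptions.

```python
# Minimal sketch (illustrative, not the paper's general scheme): basic PIAG for
#   min_x  sum_i 0.5 * ||A_i @ x - b_i||^2  +  lam * ||x||_1
import numpy as np

rng = np.random.default_rng(5)
n, d, lam, alpha = 5, 4, 0.05, 0.005
A = [rng.standard_normal((2, d)) for _ in range(n)]
b = [rng.standard_normal(2) for _ in range(n)]

x = np.zeros(d)
grads = [A[i].T @ (A[i] @ x - b[i]) for i in range(n)]  # stored (possibly stale) gradients

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # prox of t * ||.||_1

for k in range(3000):
    i = k % n                                            # refresh one component per step
    grads[i] = A[i].T @ (A[i] @ x - b[i])
    x = soft_threshold(x - alpha * np.sum(grads, axis=0), alpha * lam)
```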

Markov Chain Block Coordinate Descent

no code implementations 22 Nov 2018 Tao Sun, Yuejiao Sun, Yangyang Xu, Wotao Yin

The method selects the block to update by following a Markov chain, which is useful in settings where random and cyclic selections are either infeasible or very expensive.

Distributed Optimization
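
A minimal sketch of the block-selection idea, under assumptions: block coordinate gradient descent in which the active block follows a random walk on a small ring graph rather than being drawn uniformly or cyclically. The quadratic objective, the graph, and the step size are illustrative, not from the paper.

```python
# Minimal sketch (illustrative, not the paper's algorithm or analysis).
import numpy as np

rng = np.random.default_rng(6)
n_blocks, blk, lr = 4, 3, 0.1
target = rng.standard_normal(n_blocks * blk)         # minimize 0.5 * ||x - target||^2
neighbors = {i: [(i - 1) % n_blocks, (i + 1) % n_blocks] for i in range(n_blocks)}

x = np.zeros(n_blocks * blk)
i = 0                                                # current state of the Markov chain
for k in range(5000):
    i = rng.choice(neighbors[i])                     # one step of the random walk
    sl = slice(i * blk, (i + 1) * blk)               # coordinates of the active block
    x[sl] -= lr * (x[sl] - target[sl])               # partial gradient step on that block

print("error:", np.linalg.norm(x - target))
```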

On Markov Chain Gradient Descent

no code implementations NeurIPS 2018 Tao Sun, Yuejiao Sun, Wotao Yin

This paper studies Markov chain gradient descent, a variant of stochastic gradient descent where the random samples are taken on the trajectory of a Markov chain.
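
A minimal sketch of this sampling scheme, under assumptions: SGD in which successive sample indices come from a lazy random walk on a ring of data points rather than i.i.d. draws, with a diminishing step size. The chain, objective, and step-size schedule are illustrative, not the paper's setting.

```python
# Minimal sketch (illustrative): gradient steps driven by a Markov chain over data.
import numpy as np

rng = np.random.default_rng(7)
data = rng.standard_normal((20, 3))        # minimize the average of 0.5 * ||x - data_i||^2

x = np.zeros(3)
i = 0                                      # current state of the chain over data indices
for k in range(20000):
    i = (i + rng.choice([-1, 0, 1], p=[0.25, 0.5, 0.25])) % 20   # Markov chain step
    x -= (1.0 / (k + 1)) * (x - data[i])   # gradient step on the sampled component

print("distance to the data mean:", np.linalg.norm(x - data.mean(axis=0)))
```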

Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers

no code implementations 22 Nov 2017 Yifan Chen, Yuejiao Sun, Wotao Yin

The method alternates a "run" phase of an ordinary descent algorithm with an "inspect" phase that searches a radius-$R$ neighborhood of the current point for sufficient decrease. If no sufficient decrease is found, the current point is called an approximate $R$-local minimizer.
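
A minimal sketch of the run-and-inspect idea, with the inspection step replaced by random trial points in a radius-$R$ box (an assumption; the paper's inspection procedure differs): run gradient descent until it stalls, inspect around the iterate for sufficient decrease, restart if a better point is found, and otherwise declare an approximate $R$-local minimizer.

```python
# Minimal sketch (a simplified stand-in, not the paper's procedure); the objective,
# the random-box inspection, and all constants below are assumptions.
import numpy as np

rng = np.random.default_rng(8)
f = lambda x: np.sum(0.25 * x**4 - 0.5 * x**2)       # nonconvex toy objective
grad = lambda x: x**3 - x

def run(x, lr=0.05, iters=500):
    for _ in range(iters):                            # the "run" phase
        x = x - lr * grad(x)
    return x

x = np.zeros(3)                                       # starts at a stationary point
R, delta = 1.5, 1e-3                                  # inspection radius, decrease threshold
for outer in range(10):
    x = run(x)
    trials = x + R * rng.uniform(-1, 1, size=(200, 3))  # the "inspect" phase
    best = trials[np.argmin([f(t) for t in trials])]
    if f(best) <= f(x) - delta:
        x = best                                      # sufficient decrease: keep running
    else:
        print("approximate R-local minimizer:", x)
        break
```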
