Search Results for author: Shaocong Ma

Found 7 papers, 0 papers with code

Data Sampling Affects the Complexity of Online SGD over Dependent Data

no code implementations • 31 Mar 2022 • Shaocong Ma, Ziyi Chen, Yi Zhou, Kaiyi Ji, Yingbin Liang

Moreover, we show that, over highly dependent data, online SGD with mini-batch sampling further substantially improves the sample complexity compared with online SGD with periodic data-subsampling.
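A minimal sketch of the mini-batch idea the abstract describes: running online SGD on a dependent stream, but averaging the gradient over a mini-batch of consecutive samples before each update. The AR(1) stream, the squared loss, and all constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dependent_stream(n, rho=0.9):
    """Stream with AR(1) noise around y = 2*x: nearby samples are correlated."""
    z = 0.0
    for _ in range(n):
        z = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal()
        x = rng.standard_normal()
        yield x, 2.0 * x + 0.1 * z

def minibatch_online_sgd(stream, batch_size=16, lr=0.05, steps=200):
    """One SGD update per mini-batch of consecutive stream samples."""
    w, it = 0.0, iter(stream)
    for _ in range(steps):
        grad = 0.0
        for _ in range(batch_size):
            x, y = next(it)
            grad += (w * x - y) * x   # gradient of 0.5 * (w*x - y)^2
        w -= lr * grad / batch_size   # averaged gradient dampens the dependence
    return w

w = minibatch_online_sgd(dependent_stream(16 * 200))
# w should end up close to the true slope 2.0
```

Averaging over a batch of consecutive samples reduces the variance contributed by the correlated noise, which is the intuition behind the improved complexity claimed above.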

Stochastic Optimization

Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex Minimax Machine Learning

no code implementations • 22 Dec 2021 • Ziyi Chen, Shaocong Ma, Yi Zhou

Alternating gradient-descent-ascent (AltGDA) is an optimization algorithm widely used for model training in various machine learning applications; it aims to solve nonconvex minimax optimization problems.
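To make the alternating update pattern concrete, here is a bare-bones GDA sketch (without the proximal and acceleration components the paper adds) on a simple convex-concave objective f(x, y) = x² + 2xy − y². The objective and step size are illustrative choices, not from the paper.

```python
def f_grad_x(x, y):
    return 2 * x + 2 * y   # d/dx of f(x, y) = x^2 + 2xy - y^2

def f_grad_y(x, y):
    return 2 * x - 2 * y   # d/dy of the same objective

def alt_gda(x0=1.0, y0=-1.0, lr=0.1, steps=200):
    """Alternating GDA: the ascent step on y uses the freshly updated x."""
    x, y = x0, y0
    for _ in range(steps):
        x = x - lr * f_grad_x(x, y)   # descent step on the min variable
        y = y + lr * f_grad_y(x, y)   # ascent step on the max variable
    return x, y

x, y = alt_gda()
# (x, y) should approach the saddle point (0, 0)
```

The defining feature, versus simultaneous GDA, is that the y-update sees the already-updated x within the same iteration.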

BIG-bench Machine Learning

Sample Efficient Stochastic Policy Extragradient Algorithm for Zero-Sum Markov Game

no code implementations • ICLR 2022 • Ziyi Chen, Shaocong Ma, Yi Zhou

Two-player zero-sum Markov game is a fundamental problem in reinforcement learning and game theory.

How to Improve Sample Complexity of SGD over Highly Dependent Data?

no code implementations • 29 Sep 2021 • Shaocong Ma, Ziyi Chen, Yi Zhou, Kaiyi Ji, Yingbin Liang

Specifically, with a $\phi$-mixing model that captures both exponential and polynomial decay of the data dependence over time, we show that SGD with periodic data-subsampling achieves an improved sample complexity over the standard SGD in the full spectrum of the $\phi$-mixing data dependence.
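A minimal sketch of the periodic data-subsampling strategy the abstract describes: take an SGD step only on every r-th sample of a mixing stream, discarding the samples in between so that consecutive used samples are nearly independent. The Markov feature stream, the gap length, and all constants below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixing_stream(n, rho=0.95):
    """AR(1) feature stream: strong dependence between nearby samples."""
    x = rng.standard_normal()
    for _ in range(n):
        x = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal()
        yield x, 3.0 * x + 0.1 * rng.standard_normal()

def subsampled_sgd(stream, gap=20, lr=0.1, steps=300):
    """SGD that updates only on every `gap`-th sample of the stream."""
    w, it = 0.0, iter(stream)
    for _ in range(steps):
        for _ in range(gap - 1):
            next(it)                  # discard intermediate, dependent samples
        x, y = next(it)
        w -= lr * (w * x - y) * x     # SGD step on the squared loss
    return w

w = subsampled_sgd(mixing_stream(20 * 300))
# w should end up close to the true slope 3.0
```

Skipping a gap of samples trades raw data for near-independence between updates, which is why the effective sample complexity can improve under strong mixing.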

Stochastic Optimization

Variance-Reduced Off-Policy TDC Learning: Non-Asymptotic Convergence Analysis

no code implementations • NeurIPS 2020 • Shaocong Ma, Yi Zhou, Shaofeng Zou

In the Markovian setting, our algorithm achieves the state-of-the-art sample complexity $O(\epsilon^{-1} \log \epsilon^{-1})$, which is near-optimal.

Understanding the Impact of Model Incoherence on Convergence of Incremental SGD with Random Reshuffle

no code implementations • ICML 2020 • Shaocong Ma, Yi Zhou

Specifically, minimizer incoherence measures the discrepancy between the global minimizers of a sample loss and those of the total loss, and it affects the convergence error of SGD with random reshuffle.
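For reference, SGD with random reshuffle simply draws a fresh permutation of the dataset each epoch and makes one pass through it without replacement. The toy least-squares dataset and step size below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: sample loss 0.5 * (w*x_i - y_i)^2; the total loss is their mean.
xs = rng.standard_normal(50)
ys = 2.5 * xs + 0.05 * rng.standard_normal(50)

def sgd_random_reshuffle(epochs=50, lr=0.05):
    """Each epoch visits every sample exactly once, in a fresh random order."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        order = rng.permutation(n)    # new shuffle each epoch (no replacement)
        for i in order:
            w -= lr * (w * xs[i] - ys[i]) * xs[i]
    return w

w = sgd_random_reshuffle()
# w should end up close to the true slope 2.5
```

Because each sample loss here has its minimizer near the total-loss minimizer (low incoherence in the sense described above), the iterates settle close to the shared solution.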
