Search Results for author: Zijian Liu

Found 11 papers, 0 papers with code

On the Last-Iterate Convergence of Shuffling Gradient Methods

no code implementations12 Mar 2024 Zijian Liu, Zhengyuan Zhou

Shuffling gradient methods, which are also known as stochastic gradient descent (SGD) without replacement, are widely implemented in practice, particularly including three popular algorithms: Random Reshuffle (RR), Shuffle Once (SO), and Incremental Gradient (IG).
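
For illustration only (this is not code from the paper), a minimal sketch of how the three shuffling schemes decide the order in which the $n$ component gradients are visited each epoch; the function names and fixed step size are placeholders.

    import numpy as np

    def epoch_order(n, scheme, rng, fixed_perm):
        """Index order used in one epoch of a shuffling gradient method."""
        if scheme == "RR":               # Random Reshuffle: fresh permutation every epoch
            return rng.permutation(n)
        if scheme == "SO":               # Shuffle Once: one permutation reused every epoch
            return fixed_perm
        if scheme == "IG":               # Incremental Gradient: fixed cyclic order 0, 1, ..., n-1
            return np.arange(n)
        raise ValueError(scheme)

    def shuffling_sgd(grads, x0, lr, epochs, scheme="RR", seed=0):
        """SGD without replacement; grads[i](x) is the gradient of the i-th component at x."""
        rng = np.random.default_rng(seed)
        n, x = len(grads), np.asarray(x0, dtype=float)
        fixed_perm = rng.permutation(n)  # only consumed by the SO scheme
        for _ in range(epochs):
            for i in epoch_order(n, scheme, rng, fixed_perm):
                x = x - lr * grads[i](x)
        return x                         # the last iterate, whose convergence is analysed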

Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods

no code implementations13 Dec 2023 Zijian Liu, Zhengyuan Zhou

For Lipschitz convex functions, different works have established the optimal $O(\log(1/\delta)\log T/\sqrt{T})$ or $O(\sqrt{\log(1/\delta)/T})$ high-probability convergence rates for the final iterate, where $T$ is the time horizon and $\delta$ is the failure probability.
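
As a toy illustration of what "final iterate" means in this setting (a sketch under simple assumptions, not the paper's analysis): projected stochastic subgradient descent on a Lipschitz convex problem that returns $x_T$ itself rather than an average of the iterates.

    import numpy as np

    def last_iterate_sgd(subgrad, project, x0, T, c=1.0, seed=0):
        """Projected stochastic subgradient descent that outputs the final iterate x_T."""
        rng = np.random.default_rng(seed)
        x = x0
        for t in range(1, T + 1):
            g = subgrad(x, rng)                       # stochastic subgradient oracle
            x = project(x - (c / np.sqrt(t)) * g)     # standard 1/sqrt(t) step size
        return x                                      # no averaging: the last iterate is returned

    # Example: minimize E|x - z| with z ~ N(0, 1) over the interval [-1, 1].
    x_T = last_iterate_sgd(
        subgrad=lambda x, rng: np.sign(x - rng.standard_normal()),
        project=lambda x: np.clip(x, -1.0, 1.0),
        x0=1.0, T=10_000)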

STDA-Meta: A Meta-Learning Framework for Few-Shot Traffic Prediction

no code implementations31 Oct 2023 Maoxiang Sun, Weilong Ding, Tianpu Zhang, Zijian Liu, Mengda Xing

As cities develop, traffic congestion becomes an increasingly pressing issue, and traffic prediction is a classic method for relieving it.

Domain Adaptation, Few-Shot Learning, +3

Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises: High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation

no code implementations22 Mar 2023 Zijian Liu, Zhengyuan Zhou

Recently, several studies have considered the stochastic optimization problem in a heavy-tailed noise regime, i.e., the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment (say, upper bounded by $\sigma^{p}$ for some $\sigma\geq0$) where $p\in(1, 2]$, which not only generalizes the traditional finite-variance assumption ($p=2$) but has also been observed in practice for several different tasks.

Stochastic Optimization
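
A hedged illustration of the noise assumption (not code from the paper): Student-t noise with $1<\nu\le 2$ degrees of freedom has a finite $p$-th moment only for $p<\nu$, hence infinite variance, and gradient clipping is a common remedy in this line of work; the step-size and clipping schedules below are placeholders.

    import numpy as np

    def clipped_sgd_step(x, grad_fn, lr, clip):
        """One SGD step with gradient clipping, a standard remedy under heavy-tailed noise."""
        g = grad_fn(x)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)        # rescale the stochastic gradient to norm <= clip
        return x - lr * g

    # Toy oracle: gradient of f(x) = ||x||^2 / 2 plus Student-t noise with nu = 1.5,
    # whose p-th moment is finite only for p < 1.5 (so its variance is infinite).
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + rng.standard_t(df=1.5, size=x.shape)

    x = np.ones(5)
    for t in range(1, 1001):
        x = clipped_sgd_step(x, noisy_grad, lr=0.1 / np.sqrt(t), clip=5.0)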

High Probability Convergence of Stochastic Gradient Methods

no code implementations28 Feb 2023 Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy Lê Nguyen

Instead, we show high-probability convergence with bounds depending on the initial distance to the optimal solution.

Vocal Bursts Intensity Prediction

Breaking the Lower Bound with (Little) Structure: Acceleration in Non-Convex Stochastic Optimization with Heavy-Tailed Noise

no code implementations14 Feb 2023 Zijian Liu, Jiawei Zhang, Zhengyuan Zhou

For this class of problems, we propose the first variance-reduced accelerated algorithm and establish that it guarantees a high-probability convergence rate of $O(\log(T/\delta)T^{\frac{1-p}{2p-1}})$ under a mild condition, which is faster than the $\Omega(T^{\frac{1-p}{3p-2}})$ lower bound.

Stochastic Optimization
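
As a quick check on the two exponents (an illustration, not a claim from the abstract): for $p=2$ the accelerated rate decays as $T^{\frac{1-p}{2p-1}}=T^{-1/3}$, whereas the lower bound being broken only allows $T^{\frac{1-p}{3p-2}}=T^{-1/4}$; as $p\to1$ both exponents approach $0$, reflecting that convergence slows down as the noise tails get heavier.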

Near-Optimal Non-Convex Stochastic Optimization under Generalized Smoothness

no code implementations13 Feb 2023 Zijian Liu, Srikanth Jagabathula, Zhengyuan Zhou

Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $O(\epsilon)$-stationary point.

Stochastic Optimization

META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions

no code implementations29 Sep 2022 Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy L. Nguyen

There, STORM utilizes recursive momentum to achieve the VR effect; it was later made fully adaptive in STORM+ [Levy et al., '21], where full-adaptivity removes the requirement of knowing certain problem-specific parameters, such as the smoothness of the objective and bounds on the variance and norm of the stochastic gradients, in order to set the step size.

Stochastic Optimization
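
A minimal sketch of the STORM-style recursive-momentum estimator described above (the momentum weight and step size below are simplified placeholders, not META-STORM's fully adaptive choices):

    import numpy as np

    def storm(stoch_grad, sample, x0, T, lr=0.05, seed=0):
        """SGD driven by a STORM-style recursive-momentum (variance-reduced) estimator.

        stoch_grad(x, xi) is the gradient of one sample xi at point x; the same
        sample is reused at x_t and x_{t-1}, which gives the VR effect without
        checkpoints or mega-batches.
        """
        rng = np.random.default_rng(seed)
        x_prev, x, d = None, np.asarray(x0, dtype=float), None
        for t in range(1, T + 1):
            xi = sample(rng)
            a = min(1.0, t ** (-2 / 3))                  # placeholder momentum weight a_t
            if d is None:
                d = stoch_grad(x, xi)                    # first step: plain stochastic gradient
            else:
                d = stoch_grad(x, xi) + (1 - a) * (d - stoch_grad(x_prev, xi))
            x_prev, x = x, x - lr * d                    # gradient step with the VR estimate
        return x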

On the Convergence of AdaGrad(Norm) on $\mathbb{R}^{d}$: Beyond Convexity, Non-Asymptotic Rate and Acceleration

no code implementations29 Sep 2022 Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

Finally, we give new accelerated adaptive algorithms and their convergence guarantee in the deterministic setting with explicit dependency on the problem parameters, improving upon the asymptotic rate shown in previous works.
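
For reference, a minimal sketch of the basic AdaGrad-Norm update that the title refers to, i.e. a single scalar step size adapted by the accumulated squared gradient norms; the accelerated variants introduced in the paper are not reproduced here.

    import numpy as np

    def adagrad_norm(grad_fn, x0, T, eta=1.0, b0=1e-8, seed=0):
        """AdaGrad-Norm: b_t^2 = b_{t-1}^2 + ||g_t||^2, x_{t+1} = x_t - (eta / b_t) * g_t."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        b_sq = b0 ** 2
        for _ in range(T):
            g = grad_fn(x, rng)
            b_sq += float(np.dot(g, g))      # accumulate squared gradient norms
            x = x - (eta / np.sqrt(b_sq)) * g
        return x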

Adaptive Accelerated (Extra-)Gradient Methods with Variance Reduction

no code implementations28 Jan 2022 Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen

To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG).
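
The excerpt only names the two algorithms, so for context here is a generic SVRG-style variance-reduced extra-gradient sketch; it is not AdaVRAE or AdaVRAG and omits their adaptive step sizes.

    import numpy as np

    def svrg_extragradient(grads, x0, lr, epochs, inner, seed=0):
        """Generic variance-reduced extra-gradient method (illustrative only)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        n = len(grads)
        for _ in range(epochs):
            x_ref = x.copy()
            mu = sum(g(x_ref) for g in grads) / n                  # full gradient at the snapshot
            for _ in range(inner):
                i = rng.integers(n)
                vr = lambda z: grads[i](z) - grads[i](x_ref) + mu  # variance-reduced estimate
                y = x - lr * vr(x)                                 # extrapolation (look-ahead) step
                x = x - lr * vr(y)                                 # update using the gradient at y
        return x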

Fractional order graph neural network

no code implementations5 Jan 2020 Zijian Liu, Chunbo Luo, Shuai Li, Peng Ren, Geyong Min

This paper proposes fractional order graph neural networks (FGNNs), optimized by an approximation strategy, to address the local-optimum problem of classic and fractional graph neural networks, which are specialised at aggregating information from the feature and adjacency matrices of connected nodes and their neighbours to solve learning tasks on non-Euclidean data such as graphs.

Object Recognition
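
As background for the aggregation step described above (a plain GCN-style layer with symmetric normalisation, not the fractional-order FGNN update itself):

    import numpy as np

    def gcn_layer(A, X, W):
        """One graph-convolution layer: aggregate features from each node's neighbours.

        A: (n, n) adjacency matrix, X: (n, d) node features, W: (d, d_out) weights.
        Uses the symmetric normalisation A_hat = D^{-1/2} (A + I) D^{-1/2}.
        """
        n = A.shape[0]
        A_tilde = A + np.eye(n)                      # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
        A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(A_hat @ X @ W, 0.0)        # ReLU(A_hat X W)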
