Search Results for author: Jing An

Found 6 papers, 1 paper with code

Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks

no code implementations • 18 Apr 2023 • Jing An, Jianfeng Lu

We study the convergence of stochastic gradient descent (SGD) for non-convex objective functions.

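For reference, one common local form of the Łojasiewicz gradient inequality is sketched below; this is an illustration only, and the paper's exact condition and exponents may differ.

```latex
% Local Lojasiewicz gradient inequality (illustrative form): for some
% c > 0 and exponent \theta \in [1/2, 1), on a neighborhood U of a
% minimizer with local minimum value f^*,
\[
  \|\nabla f(x)\| \;\ge\; c\,\bigl(f(x) - f^*\bigr)^{\theta}
  \qquad \text{for all } x \in U .
\]
% The case \theta = 1/2 is the Polyak-Lojasiewicz inequality, under
% which gradient descent converges linearly.
```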

Critical Points and Convergence Analysis of Generative Deep Linear Networks Trained with Bures-Wasserstein Loss

1 code implementation • 6 Mar 2023 • Pierre Bréchet, Katerina Papagiannouli, Jing An, Guido Montúfar

We consider a deep matrix factorization model of covariance matrices trained with the Bures-Wasserstein distance.
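The loss named here has a standard closed form for symmetric positive definite matrices; the following is a minimal, self-contained sketch (not the authors' implementation):

```python
# Minimal sketch (not the authors' code): squared Bures-Wasserstein
# distance between SPD covariance matrices,
#   BW^2(A, B) = tr(A) + tr(B) - 2 * tr((A^{1/2} B A^{1/2})^{1/2}).
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_sq(A, B):
    root_A = sqrtm(A)
    cross = sqrtm(root_A @ B @ root_A)
    return float(np.trace(A) + np.trace(B) - 2.0 * np.real(np.trace(cross)))

# Example on two random 3x3 covariance matrices.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((50, 3)), rng.standard_normal((50, 3))
print(bures_wasserstein_sq(np.cov(X.T), np.cov(Y.T)))
```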

Combining resampling and reweighting for faithful stochastic optimization

no code implementations • 31 May 2021 • Jing An, Lexing Ying

When the loss function is a sum of multiple terms, a popular optimization method is stochastic gradient descent.

Stochastic Optimization
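As a minimal illustration of the setup named in the abstract (plain SGD on a finite-sum loss, not the paper's combined resampling-reweighting method), one term is sampled per step; names and constants here are illustrative.

```python
# Minimal sketch: SGD on f(x) = (1/n) * sum_i f_i(x), sampling one
# term uniformly per step.
import numpy as np

def sgd(grads, x0, lr=0.1, steps=2000, seed=0):
    """grads: list of per-term gradient functions f_i'(x)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        i = rng.integers(len(grads))  # uniform term index
        x = x - lr * grads[i](x)      # unbiased stochastic gradient step
    return x

# Example: f(x) = 0.5*(x - 1)^2 + 0.5*(x + 1)^2 is minimized at x = 0.
print(sgd([lambda x: x - 1.0, lambda x: x + 1.0], x0=[5.0]))
```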

Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients

no code implementations • ICLR 2021 • Jing An, Lexing Ying, Yuhua Zhu

We consider two commonly-used techniques, resampling and reweighting, that rebalance the proportions of the subgroups to maintain the desired objective function.
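A hedged sketch of the two techniques (illustrative, not the authors' code or notation): the observed group proportions differ from the proportions the objective asks for, and each technique corrects for this in a different place.

```python
# Two ways to correct subgroup sampling bias when the observed group
# proportions p_obs differ from the target proportions p_tgt.
import numpy as np

rng = np.random.default_rng(0)
p_obs = np.array([0.9, 0.1])   # biased proportions in the data stream
p_tgt = np.array([0.5, 0.5])   # proportions the objective asks for

def reweighting_step(x, grads, lr=0.05):
    # Sample a group at the observed rate, then scale the gradient by
    # the importance weight p_tgt / p_obs to stay unbiased.
    g = rng.choice(2, p=p_obs)
    return x - lr * (p_tgt[g] / p_obs[g]) * grads[g](x)

def resampling_step(x, grads, lr=0.05):
    # Draw the group directly from the target proportions; no weight needed.
    g = rng.choice(2, p=p_tgt)
    return x - lr * grads[g](x)

# Example: per-group losses 0.5*(x - 2)^2 and 0.5*(x + 2)^2, whose
# balanced (p_tgt) objective is minimized at x = 0.
grads = [lambda x: x - 2.0, lambda x: x + 2.0]
x = 5.0
for _ in range(2000):
    x = resampling_step(x, grads)
print(x)  # hovers near 0
```

Both update rules have the same expected gradient for the target objective; the paper's analysis concerns why the resampled iterates nonetheless behave better.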

Stochastic modified equations for the asynchronous stochastic gradient descent

no code implementations • 21 May 2018 • Jing An, Jianfeng Lu, Lexing Ying

The resulting SME of Langevin type extracts more information about the ASGD dynamics and elucidates the relationship between different types of stochastic gradient algorithms.
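For orientation, the generic Langevin-type stochastic modified equation for plain SGD has the form below; the paper derives an ASGD-specific version, so treat this only as the template.

```latex
% Generic Langevin-type SME for SGD with learning rate \eta (template
% only; the paper's ASGD equation differs):
\[
  \mathrm{d}X_t \;=\; -\nabla f(X_t)\,\mathrm{d}t
  \;+\; \sqrt{\eta}\,\Sigma(X_t)^{1/2}\,\mathrm{d}W_t,
\]
% where \Sigma(x) is the covariance of the stochastic gradient noise
% at x and W_t is a standard Brownian motion.
```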
