no code implementations • 3 Oct 2023 • Nuoya Xiong, Lijun Ding, Simon S. Du
This linear convergence result in the over-parameterized case is especially significant because one can apply the asymmetric parameterization to the symmetric setting to speed up the convergence rate from $\Omega(1/T^2)$ to linear.
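A minimal NumPy sketch of this asymmetric trick on a toy symmetric problem (dimensions, step size, and initialization scale are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Fit a symmetric PSD target M* with an asymmetric factorization F @ G.T
# instead of the symmetric X @ X.T. All hyperparameters are assumptions.
rng = np.random.default_rng(0)
n, r_true, r_fit = 20, 2, 5              # over-parameterized: r_fit > r_true
U = rng.standard_normal((n, r_true))
M_star = U @ U.T                          # symmetric PSD ground truth

def fit(asymmetric, steps=2000, lr=0.01, scale=1e-3):
    F = scale * rng.standard_normal((n, r_fit))
    G = scale * rng.standard_normal((n, r_fit))
    for _ in range(steps):
        if asymmetric:
            R = F @ G.T - M_star          # gradient of 0.5*||FG^T - M*||_F^2
            F, G = F - lr * R @ G, G - lr * R.T @ F
        else:
            R = F @ F.T - M_star          # gradient of 0.5*||FF^T - M*||_F^2
            F = F - lr * 2 * R @ F
    return np.linalg.norm((F @ G.T if asymmetric else F @ F.T) - M_star)

print("asymmetric:", fit(True), " symmetric:", fit(False))
```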
no code implementations • 25 Jun 2023 • Jun Song, Niao He, Lijun Ding, Chaoyue Zhao
Trust-region methods based on Kullback-Leibler divergence are pervasively used to stabilize policy optimization in reinforcement learning.
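As a concrete illustration of the idea, a toy KL-penalized policy step for a single state with a tabular softmax policy (the penalty weight `beta` and this setup are assumptions; a simplified stand-in for a trust-region update, not the paper's method):

```python
import numpy as np

# Ascend E_p[A] - beta * KL(p || old) in the softmax logits, keeping the
# new policy p close to the frozen old policy.
def kl_penalized_update(logits, advantages, beta=1.0, lr=0.1, steps=100):
    old = np.exp(logits - logits.max()); old /= old.sum()   # frozen policy
    theta = logits.astype(float).copy()
    for _ in range(steps):
        p = np.exp(theta - theta.max()); p /= p.sum()
        g_obj = p * (advantages - p @ advantages)           # policy-gradient term
        logratio = np.log(p / old)
        g_kl = p * (logratio - p @ logratio)                # grad of KL(p || old)
        theta += lr * (g_obj - beta * g_kl)
    return p

p_new = kl_penalized_update(np.zeros(4), np.array([1.0, 0.5, -0.2, 0.0]))
```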
no code implementations • 21 Sep 2022 • Lijun Ding, Zhen Qin, Liwei Jiang, Jinxin Zhou, Zhihui Zhu
In this paper, we study the problem of recovering a low-rank matrix from a number of noisy random linear measurements.
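For context, the standard noisy matrix sensing model behind this line of work (a generic statement of the setup, not the paper's exact assumptions):

```latex
y_i = \langle A_i, M^\star \rangle + \epsilon_i, \qquad i = 1, \dots, m,
\qquad \operatorname{rank}(M^\star) \le r \ll \min(n_1, n_2),
```

with recovery typically posed as minimizing $\frac{1}{2m}\sum_{i=1}^m \big(\langle A_i, FG^\top\rangle - y_i\big)^2$ over low-rank factors $F, G$.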
no code implementations • 7 Mar 2022 • Lijun Ding, Dmitriy Drusvyatskiy, Maryam Fazel, Zaid Harchaoui
Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance.
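One standard way to formalize growth of the loss around a minimizer is the quadratic growth condition (a generic definition for context; the paper may use a different or more refined notion):

```latex
f(x) \;\ge\; f(\bar{x}) \;+\; \frac{\alpha}{2}\,
\operatorname{dist}^2\!\big(x,\; \mathcal{X}^\star\big)
\qquad \text{for all } x \text{ near } \bar{x},
```

where $\mathcal{X}^\star$ is the set of minimizers and a larger $\alpha$ means sharper growth.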
no code implementations • 6 Mar 2022 • Liwei Jiang, Yudong Chen, Lijun Ding
We study the asymmetric matrix factorization problem under a natural nonconvex formulation with arbitrary overparametrization.
no code implementations • NeurIPS 2021 • Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu
We study the robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge on the intrinsic rank.
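A hedged sketch of the general template in this line of work: subgradient descent on an $\ell_1$ loss over an over-parameterized factorization, with no rank knowledge (corruption rate, step schedule, and problem sizes are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r_true, k, m = 15, 2, 6, 600
U = rng.standard_normal((n, r_true)); M_star = U @ U.T
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)
bad = rng.random(m) < 0.2                     # grossly corrupted measurements
y[bad] += 10.0 * rng.standard_normal(bad.sum())

X = 1e-3 * rng.standard_normal((n, k))        # k > r_true, true rank unknown
for t in range(500):
    r = np.einsum('mij,ij->m', A, X @ X.T) - y
    G = np.einsum('m,mij->ij', np.sign(r), A) / m   # subgradient of l1 loss
    X -= 0.01 * 0.98**t * (G + G.T) @ X             # geometric step decay
print("error:", np.linalg.norm(X @ X.T - M_star))
```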
1 code implementation • 1 Jan 2021 • Chengrun Yang, Lijun Ding, Ziyang Wu, Madeleine Udell
Tensors are widely used to represent multiway arrays of data.
no code implementations • 7 Dec 2020 • Jicong Fan, Lijun Ding, Chengrun Yang, Zhao Zhang, Madeleine Udell
The theorems show that a sharper regularizer leads to a tighter error bound, which is consistent with our numerical results.
no code implementations • 31 Aug 2020 • Lijun Ding, Yuqian Zhang, Yudong Chen
Existing results for low-rank matrix recovery largely focus on the quadratic loss, which enjoys favorable properties such as restricted strong convexity/smoothness (RSC/RSM) and well-conditioning over all low-rank matrices.
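For reference, RSC/RSM require that over the restricted set of low-rank matrices the loss $f$ is sandwiched between two quadratics (standard definitions, stated generically):

```latex
\frac{\alpha}{2}\,\|Y - X\|_F^2
\;\le\; f(Y) - f(X) - \langle \nabla f(X),\, Y - X \rangle
\;\le\; \frac{\beta}{2}\,\|Y - X\|_F^2
\qquad \text{for all low-rank } X, Y,
```

with conditioning governed by the ratio $\beta/\alpha$.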
no code implementations • 29 Jun 2020 • Lijun Ding, Jicong Fan, Madeleine Udell
This paper proposes a new variant of Frank-Wolfe (FW), called $k$FW.
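For orientation, classical Frank-Wolfe, the template that $k$FW builds on (the $k$FW-specific subproblem is omitted; the $\ell_1$-ball constraint and quadratic objective are illustrative assumptions):

```python
import numpy as np

def frank_wolfe(grad_f, x0, tau=1.0, steps=100):
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g)))        # linear minimization oracle (LMO)
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])          # vertex of the l1 ball of radius tau
        x += (2.0 / (t + 2.0)) * (s - x)     # standard step size 2/(t+2)
    return x

# toy usage: minimize 0.5*||Ax - b||^2 subject to ||x||_1 <= 1
rng = np.random.default_rng(2)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_hat = frank_wolfe(lambda x: A.T @ (A @ x - b), np.zeros(10))
```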
no code implementations • 25 Feb 2020 • Lijun Ding, Madeleine Udell
It is more challenging to show that an approximate solution to the SDP formulated with noisy problem data acceptably solves the original problem; arguments are usually ad hoc for each problem setting, and can be complex.
no code implementations • NeurIPS 2019 • Jicong Fan, Lijun Ding, Yudong Chen, Madeleine Udell
Compared to the max norm and the factored formulation of the nuclear norm, factor group-sparse regularizers are more efficient, accurate, and robust to the initial guess of rank.
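For context, the factored formulation of the nuclear norm referenced here is the standard identity below; the factor group-sparse regularizers replace the squared Frobenius terms with column-group norms (that replacement is the paper's contribution and is not reproduced here):

```latex
\|M\|_* \;=\; \min_{U V^\top = M} \; \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big).
```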
no code implementations • 11 Nov 2019 • Lijun Ding, Benjamin Grimmer
In this paper, we show that the bundle method can be applied to solve semidefinite programming problems with a low-rank solution without ever constructing a full matrix.
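For reference, a proximal bundle method minimizes a nonsmooth convex $f$ by iterating on a cutting-plane model plus a proximal term (standard template; the paper's low-rank specialization is not shown):

```latex
x_{t+1} \;=\; \operatorname*{arg\,min}_{x} \;
\max_{i \in B_t} \big\{ f(x_i) + \langle g_i,\, x - x_i \rangle \big\}
\;+\; \frac{\rho}{2}\,\|x - x_t\|^2,
\qquad g_i \in \partial f(x_i),
```

where $B_t$ indexes the bundle of past cuts.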
no code implementations • 22 Apr 2019 • Vasileios Charisopoulos, Yudong Chen, Damek Davis, Mateo Díaz, Lijun Ding, Dmitriy Drusvyatskiy
The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science.
no code implementations • 9 Feb 2019 • Lijun Ding, Alp Yurtsever, Volkan Cevher, Joel A. Tropp, Madeleine Udell
This paper develops a new storage-optimal algorithm that provably solves generic semidefinite programs (SDPs) in standard form.
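A standard-form SDP, for reference:

```latex
\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
\quad \text{subject to} \quad \mathcal{A}(X) = b, \quad X \succeq 0,
```

where $\mathcal{A} : \mathbb{S}^n \to \mathbb{R}^m$ is a linear map.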
no code implementations • 15 Aug 2018 • Lijun Ding, Madeleine Udell
We introduce a few variants of Frank-Wolfe-style algorithms suitable for large-scale optimization.
no code implementations • 20 Mar 2018 • Lijun Ding, Yudong Chen
In this paper, we introduce a powerful technique based on leave-one-out analysis to the study of low-rank matrix completion problems.
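The core device, stated generically (a standard description of leave-one-out arguments, not the paper's exact construction): for each index $l$, build an auxiliary estimate $\hat{M}^{(l)}$ from the data with all observations involving $l$ removed, so that $\hat{M}^{(l)}$ is independent of those observations, then control row-wise error by the triangle inequality

```latex
\big\|\hat{M}_{l,\cdot} - M^\star_{l,\cdot}\big\|_2
\;\le\; \big\|\hat{M}_{l,\cdot} - \hat{M}^{(l)}_{l,\cdot}\big\|_2
\;+\; \big\|\hat{M}^{(l)}_{l,\cdot} - M^\star_{l,\cdot}\big\|_2,
```

bounding the second term via independence and the first via the proximity of the two runs.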