Search Results for author: Lijun Ding

Found 17 papers, 1 paper with code

How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing: The Curses of Symmetry and Initialization

no code implementations · 3 Oct 2023 · Nuoya Xiong, Lijun Ding, Simon S. Du

This linear convergence result in the over-parameterization case is especially significant because one can apply the asymmetric parameterization to the symmetric setting to speed up from $\Omega (1/T^2)$ to linear convergence.
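The two parameterizations can be illustrated with a toy matrix-sensing run (a minimal sketch, not the paper's experiments; all dimensions, step sizes, and iteration counts here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k, m = 8, 1, 3, 200  # true rank r, over-specified factor rank k > r

# Ground-truth low-rank matrix and Gaussian sensing matrices
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T
M_star /= np.linalg.norm(M_star)  # normalize so a fixed step size is safe
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)

def grad(M):
    """Gradient of the least-squares sensing loss at M."""
    resid = np.einsum('mij,ij->m', A, M) - y
    return np.einsum('m,mij->ij', resid, A) / m

# Asymmetric parameterization M = F @ G.T, small random initialization
F = 0.1 * rng.standard_normal((n, k))
G = 0.1 * rng.standard_normal((n, k))
eta = 0.2
for _ in range(1000):
    Gm = grad(F @ G.T)
    F, G = F - eta * Gm @ G, G - eta * Gm.T @ F

rel_err = np.linalg.norm(F @ G.T - M_star) / np.linalg.norm(M_star)
```

Replacing the product `F @ G.T` with a single symmetric factor (`F @ F.T`) gives the symmetric parameterization whose over-parameterized slowdown the paper analyzes.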

Provably Convergent Policy Optimization via Metric-aware Trust Region Methods

no code implementations · 25 Jun 2023 · Jun Song, Niao He, Lijun Ding, Chaoyue Zhao

Trust-region methods based on Kullback-Leibler divergence are pervasively used to stabilize policy optimization in reinforcement learning.

Continuous Control · Policy Gradient Methods
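For context only (the paper's contribution is a metric-aware alternative to this KL-based scheme), a KL trust-region check for two discrete action distributions can be sketched as:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) between two discrete action distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def within_trust_region(pi_old, pi_new, delta=0.01):
    """Accept a candidate policy only if it stays in a KL ball of radius delta."""
    return kl_divergence(pi_old, pi_new) <= delta
```

A small policy perturbation passes the check, while a drastic change (e.g. moving most probability mass to other actions) is rejected, which is what stabilizes the policy updates.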

A Validation Approach to Over-parameterized Matrix and Image Recovery

no code implementations · 21 Sep 2022 · Lijun Ding, Zhen Qin, Liwei Jiang, Jinxin Zhou, Zhihui Zhu

In this paper, we study the problem of recovering a low-rank matrix from a number of noisy random linear measurements.

Image Restoration
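The validation idea can be sketched as follows, assuming a held-out split of the measurements is used to pick the stopping point of an over-parameterized fit (dimensions, noise level, and schedule are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, k = 10, 2, 6              # true rank r, over-specified rank k
m_tr, m_va, sigma = 300, 100, 0.1

U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T
M_star /= np.linalg.norm(M_star)

def measure(A, M):
    return np.einsum('mij,ij->m', A, M)

A_tr = rng.standard_normal((m_tr, n, n))
A_va = rng.standard_normal((m_va, n, n))
y_tr = measure(A_tr, M_star) + sigma * rng.standard_normal(m_tr)
y_va = measure(A_va, M_star) + sigma * rng.standard_normal(m_va)

# Overparameterized symmetric factorization fit on the training split;
# the held-out split selects the best iterate (implicit early stopping).
F = 0.01 * rng.standard_normal((n, k))
eta, best_val, best_M = 0.2, np.inf, None
for _ in range(500):
    resid = measure(A_tr, F @ F.T) - y_tr
    G = np.einsum('m,mij->ij', resid, A_tr) / m_tr
    F = F - eta * (G + G.T) @ F
    val = np.mean((measure(A_va, F @ F.T) - y_va) ** 2)
    if val < best_val:
        best_val, best_M = val, F @ F.T

rel_err = np.linalg.norm(best_M - M_star) / np.linalg.norm(M_star)
```

Because the factorization rank k exceeds the true rank r, running to full convergence would eventually fit the noise; tracking the held-out loss picks an iterate before that happens.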

Flat minima generalize for low-rank matrix recovery

no code implementations · 7 Mar 2022 · Lijun Ding, Dmitriy Drusvyatskiy, Maryam Fazel, Zaid Harchaoui

Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance.

Matrix Completion

Algorithmic Regularization in Model-free Overparametrized Asymmetric Matrix Factorization

no code implementations · 6 Mar 2022 · Liwei Jiang, Yudong Chen, Lijun Ding

We study the asymmetric matrix factorization problem under a natural nonconvex formulation with arbitrary overparametrization.

Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery

no code implementations · NeurIPS 2021 · Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu

We study the robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge of the intrinsic rank.
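A subgradient method on the l1 sensing loss with an over-specified rank can be sketched as below (illustrative constants; the paper's precise initialization and step-size schedule differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, k, m = 8, 1, 3, 400       # factor rank k over-specifies the true rank r
U_star = rng.standard_normal((n, r))
M_star = U_star @ U_star.T
M_star /= np.linalg.norm(M_star)

A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)
outliers = rng.random(m) < 0.2                  # grossly corrupt 20% of measurements
y[outliers] += 10.0 * rng.standard_normal(outliers.sum())

U = 0.1 * rng.standard_normal((n, k))
for t in range(500):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    # Subgradient of the l1 loss (1/m) * ||A(UU^T) - y||_1: signs of residuals
    G = np.einsum('m,mij->ij', np.sign(resid), A) / m
    step = 0.1 * 0.99 ** t                      # geometrically decaying step size
    U = U - step * (G + G.T) @ U

rel_err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
```

The l1 loss makes the subgradient insensitive to the gross outliers, which is why the same update that would fail under a quadratic loss can still drive the estimate toward the ground truth.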

Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization for Low-Rank Tensor Completion and Tensor Robust Principal Component Analysis

no code implementations · 7 Dec 2020 · Jicong Fan, Lijun Ding, Chengrun Yang, Zhao Zhang, Madeleine Udell

The theorems show that a relatively sharper regularizer leads to a tighter error bound, which is consistent with our numerical results.
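For reference, the Schatten-p quasi-norm itself is computed from singular values (a standard definition, not code from the paper):

```python
import numpy as np

def schatten_p(M, p):
    """Schatten-p (quasi-)norm: (sum_i sigma_i^p)^(1/p).

    A convex norm for p >= 1 (p = 1 is the nuclear norm); a nonconvex
    quasi-norm for 0 < p < 1, which penalizes rank more sharply.
    """
    s = np.linalg.svd(np.asarray(M, dtype=float), compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))
```

Smaller p gives the "sharper" regularizer referred to in the snippet: it behaves more like the rank function and less like the nuclear norm.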

Low-rank matrix recovery with non-quadratic loss: projected gradient method and regularity projection oracle

no code implementations · 31 Aug 2020 · Lijun Ding, Yuqian Zhang, Yudong Chen

Existing results for low-rank matrix recovery largely focus on quadratic loss, which enjoys favorable properties such as restricted strong convexity/smoothness (RSC/RSM) and well conditioning over all low rank matrices.

Matrix Completion
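The projected-gradient template looks like the following; this toy uses the quadratic loss for simplicity, whereas the paper's point is handling non-quadratic losses (constants are illustrative):

```python
import numpy as np

def project_rank(M, r):
    """Projection onto the set of rank-<=r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(3)
n, r, m = 8, 2, 300
M_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, M_star)

M = np.zeros((n, n))
eta = 0.5
for _ in range(100):
    resid = np.einsum('mij,ij->m', A, M) - y
    G = np.einsum('m,mij->ij', resid, A) / m   # gradient of the quadratic loss
    M = project_rank(M - eta * G, r)           # gradient step, then projection

rel_err = np.linalg.norm(M - M_star) / np.linalg.norm(M_star)
```

For a non-quadratic loss only the gradient line changes; the truncated-SVD projection playing the role of the regularity projection oracle stays the same.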

$k$FW: A Frank-Wolfe style algorithm with stronger subproblem oracles

no code implementations · 29 Jun 2020 · Lijun Ding, Jicong Fan, Madeleine Udell

This paper proposes a new variant of Frank-Wolfe (FW), called $k$FW.
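For context, classic FW and its linear minimization oracle can be sketched on the probability simplex ($k$FW's stronger subproblem oracles are defined in the paper; this is vanilla FW):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps):
    """Classic Frank-Wolfe over the probability simplex.

    The linear minimization oracle (LMO) over the simplex is trivial:
    it returns the vertex (standard basis vector) whose gradient
    coordinate is smallest.
    """
    x = np.array(x0, dtype=float)
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0         # LMO step: best vertex
        gamma = 2.0 / (t + 2.0)       # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Minimize ||x - b||^2 over the simplex; since b itself lies in the
# simplex, the optimum is b.
b = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3, steps=2000)
```

Every iterate is a convex combination of simplex vertices, so feasibility is automatic; the O(1/t) rate visible here is what stronger subproblem oracles aim to improve.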

On the simplicity and conditioning of low rank semidefinite programs

no code implementations · 25 Feb 2020 · Lijun Ding, Madeleine Udell

It is more challenging to show that an approximate solution to the SDP formulated with noisy problem data acceptably solves the original problem; arguments are usually ad hoc for each problem setting, and can be complex.

Matrix Completion · Stochastic Block Model

Factor Group-Sparse Regularization for Efficient Low-Rank Matrix Recovery

no code implementations · NeurIPS 2019 · Jicong Fan, Lijun Ding, Yudong Chen, Madeleine Udell

Compared to the max norm and the factored formulation of the nuclear norm, factor group-sparse regularizers are more efficient, accurate, and robust to the initial guess of rank.

Low-Rank Matrix Completion
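As a rough sketch of the idea, assuming a simple sum-of-column-norms form (the paper defines the precise FGSR family of regularizers):

```python
import numpy as np

def factor_group_sparse(U, V):
    """Column-group penalty: sum of l2 norms of the columns of U and V.

    Penalizing column norms drives whole columns of the factors to
    zero, which lowers the rank of U @ V.T without fixing a target
    rank in advance.
    """
    return float(np.linalg.norm(U, axis=0).sum() + np.linalg.norm(V, axis=0).sum())
```

Zeroed-out factor columns contribute nothing to the penalty or the product, which is what makes the formulation robust to an over-specified initial rank.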

Bundle Method Sketching for Low Rank Semidefinite Programming

no code implementations · 11 Nov 2019 · Lijun Ding, Benjamin Grimmer

In this paper, we show that the bundle method can be applied to solve semidefinite programming problems with a low rank solution without ever constructing a full matrix.

An Optimal-Storage Approach to Semidefinite Programming using Approximate Complementarity

no code implementations · 9 Feb 2019 · Lijun Ding, Alp Yurtsever, Volkan Cevher, Joel A. Tropp, Madeleine Udell

This paper develops a new storage-optimal algorithm that provably solves generic semidefinite programs (SDPs) in standard form.

Frank-Wolfe Style Algorithms for Large Scale Optimization

no code implementations · 15 Aug 2018 · Lijun Ding, Madeleine Udell

We introduce a few variants on Frank-Wolfe style algorithms suitable for large scale optimization.

Leave-one-out Approach for Matrix Completion: Primal and Dual Analysis

no code implementations · 20 Mar 2018 · Lijun Ding, Yudong Chen

In this paper, we introduce a powerful technique based on Leave-one-out analysis to the study of low-rank matrix completion problems.

Low-Rank Matrix Completion
