no code implementations • 25 Jan 2023 • Xiao Li, Zhihui Zhu, Qiuwei Li, Kai Liu
The symmetric Nonnegative Matrix Factorization (NMF), a special but important class of the general NMF, has found numerous applications in data analysis such as various clustering tasks.
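The idea behind symmetric NMF can be sketched with a few lines of NumPy. This is a minimal illustration (projected gradient descent on the Frobenius loss, not necessarily the algorithm analyzed in the paper); the factor size `r` and the clipping-based projection are illustrative choices.

```python
import numpy as np

def symmetric_nmf(X, r, steps=2000, lr=0.01, seed=0):
    """Projected gradient descent for min_{H >= 0} ||X - H H^T||_F^2.

    A minimal sketch of symmetric NMF, not the paper's algorithm:
    after each gradient step, negative entries are clipped to zero
    to stay in the nonnegative orthant.
    """
    rng = np.random.default_rng(seed)
    H = rng.random((X.shape[0], r))
    for _ in range(steps):
        grad = 4 * (H @ H.T - X) @ H        # gradient of ||X - H H^T||_F^2
        H = np.maximum(H - lr * grad, 0.0)  # projection onto H >= 0
    return H

# Recover a planted nonnegative rank-2 factor.
rng = np.random.default_rng(1)
H_true = rng.random((6, 2))
X = H_true @ H_true.T
H = symmetric_nmf(X, r=2)
err = np.linalg.norm(X - H @ H.T) / np.linalg.norm(X)
```

Because the same factor `H` appears on both sides of `H H^T`, the problem is nonconvex even though the loss is quadratic in the product — which is why the optimization landscape of symmetric NMF is a research topic in its own right.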
1 code implementation • 2 Jun 2021 • Daniel Mckenzie, Howard Heaton, Qiuwei Li, Samy Wu Fung, Stanley Osher, Wotao Yin
Systems of competing agents can often be modeled as games.
2 code implementations • 23 Mar 2021 • Samy Wu Fung, Howard Heaton, Qiuwei Li, Daniel Mckenzie, Stanley Osher, Wotao Yin
Unlike traditional networks, implicit networks solve a fixed point equation to compute inferences.
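The fixed-point computation that distinguishes implicit networks can be sketched as follows. This is a minimal illustration, assuming a `tanh` layer whose weight matrix is rescaled to be contractive so the Banach fixed-point theorem guarantees a unique solution; the shapes and the scaling factor are illustrative choices, not the paper's architecture.

```python
import numpy as np

def implicit_layer(x, W, U, tol=1e-8, max_iter=500):
    """Compute the inference z* satisfying z* = tanh(W z* + U x).

    A minimal sketch of an implicit (fixed-point) layer: instead of a
    fixed stack of layers, iterate the update until it converges.
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.9 / np.linalg.norm(W, 2)   # ||W|| < 1 makes the map a contraction
U = rng.standard_normal((8, 3))
x = rng.standard_normal(3)
z_star = implicit_layer(x, W, U)
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x))
```

The number of iterations is determined by the convergence tolerance rather than a fixed depth, which is the key difference from a conventional feed-forward network.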
1 code implementation • NeurIPS 2019 • Zhihui Zhu, Qiuwei Li, Xinshuo Yang, Gongguo Tang, Michael B. Wakin
Low-rank matrix factorization is a problem of broad importance, owing to the ubiquity of low-rank models in machine learning contexts.
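A minimal sketch of the factored approach to low-rank approximation (plain gradient descent on `||U V^T - M||_F^2`, with illustrative sizes and step size — not the specific setting analyzed in the paper):

```python
import numpy as np

def factored_gd(M, r, steps=5000, lr=0.01, seed=0):
    """Gradient descent on the factored objective ||U V^T - M||_F^2.

    Optimizing over the thin factors U (m x r) and V (n x r) builds the
    rank constraint into the parameterization, at the price of making
    the problem nonconvex.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, r))
    V = 0.1 * rng.standard_normal((n, r))
    for _ in range(steps):
        R = U @ V.T - M                               # residual
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)   # simultaneous update
    return U, V

rng = np.random.default_rng(2)
M = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))  # rank-2 target
U, V = factored_gd(M, r=2)
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Despite the nonconvexity, gradient descent from a generic initialization typically recovers the planted low-rank matrix in this setting — the benign-landscape behavior this line of work studies.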
no code implementations • 22 Apr 2019 • Qiuwei Li, Zhihui Zhu, Gongguo Tang, Michael B. Wakin
Therefore, this work not only develops guaranteed optimization methods for non-Lipschitz smooth problems but also resolves the open problem of establishing second-order convergence guarantees for these alternating minimization methods.
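Alternating minimization itself is easy to sketch for plain low-rank factorization: fix one factor and solve the resulting least-squares problem for the other in closed form, then swap. This is a generic illustration under that assumption, not the paper's exact problem class.

```python
import numpy as np

def altmin(M, r, steps=50, seed=0):
    """Alternating minimization for min_{U,V} ||U V^T - M||_F^2.

    Each half-step is an exact least-squares solve:
        U <- argmin_U ||U V^T - M||_F^2 = M pinv(V^T)
        V <- argmin_V ||U V^T - M||_F^2 = M^T pinv(U^T)
    """
    rng = np.random.default_rng(seed)
    n = M.shape[1]
    V = rng.standard_normal((n, r))
    for _ in range(steps):
        U = M @ np.linalg.pinv(V.T)
        V = M.T @ np.linalg.pinv(U.T)
    return U, V

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))  # rank-2 target
U, V = altmin(M, r=2)
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Note that each subproblem objective is non-Lipschitz smooth jointly in `(U, V)` — the gradient's Lipschitz constant grows with the other factor's norm — which is precisely the difficulty the second-order analysis must handle.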
1 code implementation • 16 Mar 2019 • Kai Liu, Qiuwei Li, Hua Wang, Gongguo Tang
However, most studies of PCA aim to minimize the loss after projection, which is usually measured by Euclidean distance, even though in some fields the angle distance is known to be more important and critical for analysis.
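The contrast between the two criteria can be made concrete. Below is standard (Euclidean-loss) PCA via SVD, together with the per-sample angle between each centered point and its projection — the quantity an angle-based criterion would score instead. This is an illustrative sketch, not the paper's method.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal directions (Euclidean PCA)."""
    Xc = X - X.mean(axis=0)                  # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]                    # projector onto the principal subspace
    return Xc @ P, Xc

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
Xp, Xc = pca_project(X, k=2)

# Euclidean reconstruction loss (what PCA minimizes) ...
eucl = np.linalg.norm(Xc - Xp)
# ... versus the angle between each sample and its projection.
cos = np.sum(Xp * Xc, axis=1) / (np.linalg.norm(Xp, axis=1) * np.linalg.norm(Xc, axis=1))
angles = np.arccos(np.clip(cos, -1.0, 1.0))
```

Two samples with the same Euclidean residual can have very different angles if their norms differ, which is why the two criteria lead to different projections.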
no code implementations • NeurIPS 2018 • Zhihui Zhu, Xiao Li, Kai Liu, Qiuwei Li
Symmetric nonnegative matrix factorization (NMF), a special but important class of the general NMF, is demonstrated to be useful for data analysis and in particular for various clustering tasks.
no code implementations • 7 Nov 2018 • Zhihui Zhu, Qiuwei Li, Xinshuo Yang, Gongguo Tang, Michael B. Wakin
We study the convergence of a variant of distributed gradient descent (DGD) on a distributed low-rank matrix approximation problem wherein some optimization variables are used for consensus (as in classical DGD) and some optimization variables appear only locally at a single node in the network.
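The classical DGD update that this variant builds on can be sketched on a toy network. Each agent mixes its copy of the decision variable with its neighbors' copies (the consensus step) and then takes a local gradient step. The ring topology, mixing weights, and quadratic local objectives below are illustrative assumptions.

```python
import numpy as np

def dgd(local_grads, W, x0, steps=500, lr=0.05):
    """Distributed gradient descent: x_i <- (W x)_i - lr * grad_i((W x)_i).

    W is a doubly stochastic mixing matrix encoding the network; each
    agent only sees its own gradient and its neighbors' variables.
    """
    X = np.array(x0, dtype=float)            # one row per agent
    for _ in range(steps):
        mixed = W @ X                        # consensus averaging
        grads = np.array([g(x) for g, x in zip(local_grads, mixed)])
        X = mixed - lr * grads               # local gradient step
    return X

# Four agents on a ring; agent i minimizes (x - b_i)^2, so the
# network-wide optimum of the summed objective is mean(b) = 2.5.
b = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, bi=bi: 2 * (x - bi) for bi in b]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])    # doubly stochastic ring weights
X = dgd(grads, W, np.zeros((4, 1)))
```

With a constant step size, DGD converges to a neighborhood of the optimum whose size scales with the step size — here the agents' copies cluster around 2.5 without agreeing exactly. In the paper's variant, only a subset of the variables is averaged this way, while the rest live purely at individual nodes.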
no code implementations • 19 Sep 2017 • Tao Hong, Xiao Li, Zhihui Zhu, Qiuwei Li
We consider designing a robust structured sparse sensing matrix, consisting of a sparse matrix with a few non-zero entries per row and a dense base matrix, for capturing signals efficiently. We design the robust structured sparse sensing matrix by minimizing the distance between the Gram matrix of the equivalent dictionary and a target Gram matrix with small mutual coherence.
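The Gram-matching idea can be sketched with a dense (unstructured) sensing matrix — the paper's sparse row structure and robustness terms are omitted here, and the identity is used as the small-coherence target Gram matrix, all of which are simplifying assumptions.

```python
import numpy as np

def design_sensing_matrix(Psi, m, steps=1000, lr=5e-4, seed=0):
    """Gradient descent on ||(A Psi)^T (A Psi) - I||_F^2.

    A minimal sketch of Gram-based sensing-matrix design: push the Gram
    matrix of the equivalent dictionary D = A Psi toward the identity,
    whose off-diagonal entries (the mutual coherences) are zero.
    """
    rng = np.random.default_rng(seed)
    n, K = Psi.shape
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    for _ in range(steps):
        D = A @ Psi
        A = A - lr * 4 * D @ (D.T @ D - np.eye(K)) @ Psi.T  # gradient step
    return A

def gram_loss(A, Psi):
    D = A @ Psi
    return np.linalg.norm(D.T @ D - np.eye(Psi.shape[1])) ** 2

rng = np.random.default_rng(4)
Psi = rng.standard_normal((8, 10))
Psi /= np.linalg.norm(Psi, axis=0)          # unit-norm dictionary atoms
A = design_sensing_matrix(Psi, m=4)
A_init = np.random.default_rng(0).standard_normal((4, 8)) / np.sqrt(4)  # same init as inside
loss_before, loss_after = gram_loss(A_init, Psi), gram_loss(A, Psi)
```

Since `D` has at most `m` nonzero singular values while the target has `K`, the loss cannot reach zero when `m < K`; the descent only shrinks the achievable coherence terms.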
no code implementations • 5 Apr 2017 • Qiuwei Li, Zhihui Zhu, Gongguo Tang
In spite of the nonconvexity of the factored formulation, we prove that when the convex loss function $f(X)$ is $(2r, 4r)$-restricted well-conditioned, each critical point of the factored problem either corresponds to the optimal solution $X^\star$ of the original convex optimization or is a strict saddle point where the Hessian matrix has a strictly negative eigenvalue.
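The strict-saddle dichotomy can be checked numerically on a toy instance (an assumed example, not the paper's general restricted-well-conditioned setting): take the convex loss $f(X) = \|X - M\|_F^2$ with a rank-1 PSD target and the symmetric factored objective $g(u) = \|uu^T - M\|_F^2$. The point $u = 0$ is critical but has strictly negative curvature along the top eigenvector of $M$.

```python
import numpy as np

def g(u, M):
    """Factored objective g(u) = ||u u^T - M||_F^2."""
    return np.linalg.norm(np.outer(u, u) - M) ** 2

lam = 3.0
v = np.array([1.0, 0.0])          # top eigenvector of M
M = lam * np.outer(v, v)          # rank-1 PSD target

u0 = np.zeros(2)
grad_at_zero = 4 * (np.outer(u0, u0) - M) @ u0   # gradient of g at u = 0

# Second-order finite difference of g along v at u = 0.
# Analytically g(t v) = (t^2 - lam)^2, so the curvature at 0 is -4*lam < 0.
t = 1e-3
curv = (g(t * v, M) - 2 * g(u0, M) + g(-t * v, M)) / t**2
```

The negative curvature direction means saddle-escaping methods (perturbed gradient descent, trust-region methods) move past $u = 0$, consistent with every non-optimal critical point being a strict saddle.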