Relaxed Leverage Sampling for Low-rank Matrix Completion

We consider the problem of exactly recovering an arbitrary $m\times n$ matrix of rank $\varrho$ from a small number of observed entries via the standard nuclear norm minimization framework. Such low-rank matrices have $(m+n)\varrho - \varrho^2$ degrees of freedom. We show that any low-rank matrix can be recovered exactly from $\Theta\left(((m+n)\varrho - \varrho^2)\log^2(m+n)\right)$ randomly sampled entries, thus matching the lower bound on the required number of entries (in terms of degrees of freedom) up to an additional $O(\log^2(m+n))$ factor. To achieve this sample size, we observe each entry with probability proportional to the sum of the corresponding row and column leverage scores, minus their product. We show that this relaxation of the sampling probabilities (as opposed to the sum of leverage scores in Chen et al., 2014) yields an $O(\varrho^2\log^2(m+n))$ additive improvement over the best known sample size for nuclear norm minimization, obtained by Chen et al., 2014. Experiments on real data corroborate the theoretical improvement in sample size. Further, exact recovery of $(a)$ incoherent matrices (with restricted leverage scores) and $(b)$ matrices with only one of the row or column spaces incoherent can be performed using our relaxed leverage score sampling, via nuclear norm minimization, without knowing the leverage scores a priori. In these settings, too, we achieve an improvement in sample size.
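To make the sampling rule concrete, below is a minimal NumPy sketch of the relaxed leverage-score probabilities described in the abstract. It is illustrative only, not the paper's code: the function names, the constant `c0`, the $\log^2(m+n)$ scaling, and the use of unnormalized leverage scores $\|U_{i,:}\|_2^2$, $\|V_{j,:}\|_2^2$ are assumptions; the exact normalization and constants follow the paper's theorems. In practice the SVD of the target matrix is of course not available, and the paper discusses settings where the leverage scores need not be known a priori.

```python
import numpy as np

def relaxed_leverage_probabilities(M, rank, c0=1.0):
    """Relaxed leverage-score sampling probabilities for a rank-`rank` matrix.

    Weights are proportional to (row leverage + column leverage - their
    product), rather than their plain sum. `c0` and the log^2(m + n) factor
    are placeholders; the exact constants follow the paper, not this sketch.
    """
    m, n = M.shape
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :rank], Vt[:rank, :].T

    # Leverage scores: squared row norms of the orthonormal factors,
    # each lying in [0, 1] and summing to `rank`.
    mu = np.sum(U**2, axis=1)          # row leverage scores, shape (m,)
    nu = np.sum(V**2, axis=1)          # column leverage scores, shape (n,)

    # Relaxed weights: mu_i + nu_j - mu_i * nu_j.
    W = mu[:, None] + nu[None, :] - np.outer(mu, nu)

    # Bernoulli sampling probabilities, capped at 1.
    return np.minimum(1.0, c0 * W * np.log(m + n) ** 2)

def sample_mask(p, seed=0):
    """Observe entry (i, j) independently with probability p[i, j]."""
    rng = np.random.default_rng(seed)
    return rng.random(p.shape) < p
```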

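Given a Bernoulli mask of observed entries, the recovery step is standard nuclear norm minimization: minimize $\|X\|_*$ subject to agreement with the observations. The following CVXPY sketch is a generic illustration under that formulation; the helper name, the default conic solver, and the toy uniform-sampling example are assumptions, not the paper's implementation or experiments.

```python
import numpy as np
import cvxpy as cp

def complete_nuclear_norm(M_obs, mask):
    """min ||X||_* subject to X agreeing with M_obs on the observed entries.

    M_obs : (m, n) array holding observed values (entries outside mask ignored).
    mask  : (m, n) boolean array, True where an entry was sampled.
    """
    W = mask.astype(float)
    X = cp.Variable(M_obs.shape)
    constraints = [cp.multiply(W, X) == cp.multiply(W, M_obs)]
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    prob.solve()  # default conic solver handles the SDP reformulation
    return X.value

# Toy usage with a uniform-at-random mask; a mask drawn from the relaxed
# leverage-score probabilities (previous sketch) can be plugged in instead.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))  # rank 2
mask = rng.random(M.shape) < 0.5
M_hat = complete_nuclear_norm(M * mask, mask)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # small if recovery succeeds
```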