Guaranteed Matrix Completion Under Multiple Linear Transformations

Low-rank matrix completion (LRMC) is a classical model in both computer vision (CV) and machine learning, and it has been successfully applied to various real-world applications. In recent CV tasks, completion is usually performed on variants of the data, such as "non-local" or filtered versions, rather than on the original data itself. As a result, the theoretical analysis of conventional LRMC no longer applies in these settings. To tackle this problem, we propose a more general framework for LRMC that takes linear transformations of the data into account. We rigorously prove the identifiability of the proposed model and derive an upper bound on the reconstruction error. Furthermore, we obtain an efficient completion algorithm using augmented Lagrangian multipliers and the sketching trick. In the experiments, we apply the proposed method to the classical image inpainting problem and achieve state-of-the-art results.
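The paper's algorithm combines augmented Lagrangian multipliers with a sketching trick over multiple linear transformations. As a rough illustration only, the sketch below solves a simplified single-transformation variant, minimizing the nuclear norm of a transformed matrix subject to the observed entries, via a scaled-ADMM (augmented Lagrangian) scheme. The orthogonal transform `A`, the penalty `rho`, the function names, and the iteration count are illustrative assumptions, not the paper's implementation; the sketching step and the handling of multiple transformations are omitted.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def transformed_lrmc(M, mask, A, rho=1.0, n_iter=500):
    """Minimal ADMM sketch: min_X ||A X||_*  s.t.  X[mask] = M[mask].

    A is assumed orthogonal so the X-update has a closed form; the paper's
    model (multiple transformations, sketching) is more general than this.
    """
    X = mask * M                          # initialize with the observed entries
    Z = A @ X                             # splitting variable Z = A X
    Y = np.zeros_like(Z)                  # scaled dual variable (Lagrange multiplier)
    for _ in range(n_iter):
        Z = svt(A @ X + Y, 1.0 / rho)     # Z-update: nuclear-norm prox
        X = A.T @ (Z - Y)                 # unconstrained X-update (A orthogonal)
        X = np.where(mask, M, X)          # enforce the observed entries
        Y = Y + A @ X - Z                 # dual ascent step
    return X

# Usage: recover a rank-3 matrix from roughly 40% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 80))
mask = rng.random(M.shape) < 0.4
A, _ = np.linalg.qr(rng.standard_normal((80, 80)))   # a random orthogonal transform
X_hat = transformed_lrmc(M, mask, A)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```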
