Random Consensus Robust PCA

This paper presents R2PCA, a random consensus method for robust principal component analysis. R2PCA takes RANSAC's principle of using as little data as possible one step further. It iteratively selects small subsets of the data to identify pieces of the principal components, which it then stitches together. We show that if the principal components are in general position and the errors are sufficiently sparse, R2PCA will exactly recover the principal components with probability 1, in lieu of assumptions on coherence or the distribution of the sparse errors, and even under adversarial settings. R2PCA enjoys many advantages: it works well under noise, its computational complexity scales linearly in the ambient dimension, it is easily parallelizable, and due to its low sample complexity, it can be used in settings where the data is so large it cannot even be stored in memory. We complement our theoretical findings with synthetic and real data experiments showing that R2PCA outperforms state-of-the-art methods in a broad range of settings.
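The abstract describes the algorithmic principle only at a high level. As a point of reference, below is a minimal Python sketch of a generic RANSAC-style random-consensus subspace estimator. It is not the paper's R2PCA procedure (which identifies and stitches together pieces of the principal components from small submatrices and handles entry-wise sparse errors); it only illustrates the underlying idea of fitting candidates from as little data as possible and keeping the candidate with the most consensus. The name ransac_subspace and its parameters (n_trials, inlier_tol) are hypothetical, and the toy example uses a few fully corrupted columns rather than entry-wise sparse errors.

import numpy as np

def ransac_subspace(M, r, n_trials=200, inlier_tol=1e-3, seed=None):
    # Propose candidate r-dimensional subspaces from tiny random column
    # subsets and keep the one that explains (reaches consensus with)
    # the largest number of columns of M.
    rng = np.random.default_rng(seed)
    d, n = M.shape
    best_U, best_inliers = None, -1
    col_norms = np.linalg.norm(M, axis=0) + 1e-12
    for _ in range(n_trials):
        cols = rng.choice(n, size=r, replace=False)      # as few points as possible
        U, _, _ = np.linalg.svd(M[:, cols], full_matrices=False)
        U = U[:, :r]                                      # candidate orthonormal basis
        residual = M - U @ (U.T @ M)                      # projection residuals
        inliers = np.sum(np.linalg.norm(residual, axis=0) / col_norms < inlier_tol)
        if inliers > best_inliers:
            best_U, best_inliers = U, inliers
    return best_U

# Toy usage: rank-2 data with a handful of grossly corrupted columns.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 200))
M = L.copy()
M[:, :10] += 10 * rng.standard_normal((50, 10))           # corrupted columns
U_hat = ransac_subspace(M, r=2, seed=1)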
