Enhanced Principal Component Analysis under A Collaborative-Robust Framework

22 Mar 2021 · Rui Zhang, Hongyuan Zhang, Xuelong Li

Principal component analysis (PCA) frequently suffers from the disturbance of outliers, and a spectrum of robust extensions and variations of PCA have therefore been developed. However, existing extensions of PCA treat all samples equally, even those with large noise. In this paper, we first introduce a general collaborative-robust weight learning framework that combines weight learning and a robust loss in a non-trivial way. More significantly, under the proposed framework, only a subset of well-fitting samples is activated and given greater importance during training, while the remaining samples, whose errors are large, are not simply ignored. In particular, the negative effects of the inactivated samples are alleviated by the robust loss function. We then develop an enhanced PCA that adopts a point-wise sigma-loss function, which interpolates between the L_{2,1}-norm and the squared Frobenius norm while retaining the rotational invariance property. Extensive experiments are conducted on occluded datasets and evaluated from two aspects: reconstruction error and clustering accuracy. The experimental results demonstrate the superiority and effectiveness of our model.
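To make the interpolation property concrete, the following is a minimal sketch of one common form of such a sigma-loss, applied row-wise to a residual matrix. The specific formula `(1 + sigma) * ||e||^2 / (||e|| + sigma)` is an assumption for illustration, not necessarily the exact loss used in the paper; it recovers the L_{2,1}-norm as sigma approaches 0 and the squared Frobenius norm as sigma grows large, and it depends only on row norms, so it is rotationally invariant.

```python
import numpy as np

def sigma_loss(E, sigma):
    """Point-wise sigma-loss over a residual matrix E (n samples x d features).

    Hypothetical interpolating form: for each residual row e,
        loss(e) = (1 + sigma) * ||e||^2 / (||e|| + sigma)
    sigma -> 0:   loss(e) -> ||e||        (sum over rows = L_{2,1}-norm)
    sigma -> inf: loss(e) -> ||e||^2      (sum over rows = squared Frobenius norm)
    Depending only on the row norms keeps the loss rotationally invariant.
    """
    norms = np.linalg.norm(E, axis=1)          # per-sample residual norms
    return np.sum((1 + sigma) * norms**2 / (norms + sigma))
```

Because the loss is a function of `||e||` only, right-multiplying the residuals by any orthogonal matrix leaves its value unchanged, which is the rotational invariance the abstract refers to.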

