Similarity Preserving Unsupervised Feature Selection based on Sparse Learning

Various feature selection methods have recently been proposed for different applications to reduce the computational burden of machine learning algorithms as well as the complexity of learned models. Preserving sample similarities and selecting discriminative features are two major requirements that should be satisfied, especially by unsupervised feature selection methods. This paper proposes a novel unsupervised feature selection approach that employs an ℓ2,1-norm regularization model to preserve global and local similarities by minimizing an objective function. Cluster analysis is also incorporated into this framework to take the inherent structure of the data into account. The experimental results show the strength of the proposed approach compared with earlier well-known methods on a variety of standard datasets.
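To make the idea of ℓ2,1-norm regularized feature selection concrete, below is a minimal sketch. It is not the paper's exact objective: it assumes a generic formulation that regresses the data onto a similarity-derived target `S` (for example, a spectral embedding of a k-NN graph) under an ℓ2,1 penalty, solved with the standard iterative reweighting scheme, and then ranks features by the row norms of the learned weight matrix. The function name `l21_feature_selection` and the parameters `lam`, `n_iter`, and `eps` are illustrative choices.

```python
import numpy as np

def l21_feature_selection(X, S, lam=1.0, n_iter=50, eps=1e-8):
    """Illustrative sketch (not the paper's exact formulation):
    minimize ||X W - S||_F^2 + lam * ||W||_{2,1}
    via iterative reweighting of the l2,1 term.
    X: (n_samples, n_features) data matrix.
    S: (n_samples, k) similarity-derived target, e.g. a spectral embedding.
    Returns per-feature scores (row norms of W); larger = more relevant."""
    n, d = X.shape
    D = np.eye(d)  # diagonal reweighting matrix for the l2,1 term
    for _ in range(n_iter):
        # closed-form update given D: W = (X^T X + lam * D)^{-1} X^T S
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ S)
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return np.sqrt((W ** 2).sum(axis=1))

# Usage sketch: rank features by score and keep the top-m.
# X = np.random.randn(100, 20)
# S = np.random.randn(100, 3)   # placeholder for a similarity-based embedding
# scores = l21_feature_selection(X, S, lam=0.5)
# selected = np.argsort(scores)[::-1][:10]
```

The ℓ2,1 penalty drives entire rows of W toward zero, which is why the row norms can be read directly as feature relevance scores; the choice of S is what encodes the global and local similarity structure to be preserved.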
