Sparse Bayesian approach for metric learning in latent space

This paper presents a new and efficient approach for metric learning in latent space. Our method discovers an optimal mapping from the feature space to a latent space that shrinks the distances between similar data items and increases the distances between dissimilar ones. The approach is built on a variational Bayesian framework that iteratively finds the optimal posterior distribution over the model's parameters and hyperparameters. Its advantages over similar work are: 1) it learns the noise of the latent variables on the low-dimensional manifold, yielding a more effective transformation; 2) it automatically determines the dimension of the latent space and sparsifies the solution, which prevents overfitting; 3) unlike Mahalanobis metric learning, it scales roughly linearly with the dimension of the data. The method is also extended to learning in the feature space induced by a kernel (an RKHS). We evaluate the proposed method on small and large datasets from real applications such as network intrusion detection, face recognition, handwritten digit and letter recognition, and hyperspectral image classification. The results show that it outperforms representative and state-of-the-art methods on many of these datasets.
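To make the objective concrete, the sketch below illustrates the generic idea behind latent-space metric learning: a linear map z = Wx is fit with a contrastive-style loss that pulls similar pairs together and pushes dissimilar pairs at least a margin apart. Everything here is an illustrative assumption, including the function names, the fixed latent dimension, and the plain gradient-descent loop; the paper's actual method is a variational Bayesian procedure that infers the latent dimension, the noise, and the sparsity pattern automatically rather than fixing them by hand.

```python
import numpy as np

def loss_and_grad(W, X, pairs, same, margin=1.0):
    """Contrastive-style surrogate for latent-space metric learning.

    z = W x maps features to the latent space; similar pairs are pulled
    together, dissimilar pairs are pushed at least `margin` apart.
    This is a generic sketch, not the paper's variational objective.
    """
    loss, grad = 0.0, np.zeros_like(W)
    for (i, j), s in zip(pairs, same):
        delta = X[i] - X[j]                # difference in feature space
        zd = W @ delta                     # difference in latent space
        d = np.linalg.norm(zd) + 1e-12     # latent distance (avoid /0)
        if s:                              # similar pair: shrink distance
            loss += d ** 2
            grad += 2.0 * np.outer(zd, delta)
        elif d < margin:                   # dissimilar pair: enforce margin
            loss += (margin - d) ** 2
            grad -= 2.0 * (margin - d) / d * np.outer(zd, delta)
    n = len(pairs)
    return loss / n, grad / n

# Toy run: learn a rank-k map W with k < d. The paper infers the latent
# dimension automatically; here k is fixed by hand for illustration.
rng = np.random.default_rng(0)
d, k, n = 10, 3, 40
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)             # two synthetic classes
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
same = [y[i] == y[j] for i, j in pairs]

W = 0.1 * rng.normal(size=(k, d))
for step in range(200):
    L, G = loss_and_grad(W, X, pairs, same)
    W -= 0.05 * G                           # plain gradient descent
print(f"final contrastive loss: {L:.4f}")
```

Note that this toy version optimizes W directly, so its cost grows with the number of training pairs but, like the paper's claim for the proposed method, only linearly with the feature dimension d, since W has k x d entries rather than the d x d entries of a full Mahalanobis matrix.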
