Optimized Kernel-based Projection Space of Riemannian Manifolds

10 Feb 2016 · Azadeh Alavi, Vishal M. Patel, Rama Chellappa

It has been shown that encoding images and videos as Symmetric Positive Definite (SPD) matrices, and accounting for the Riemannian geometry of the resulting space, can improve classification performance. Manifold geometry is typically handled by embedding the manifold into a tangent space or a Reproducing Kernel Hilbert Space (RKHS). Recently, it was shown that embedding such manifolds into a Random Projection Space (RPS), rather than an RKHS or tangent space, yields higher classification and clustering performance. However, depending on the structure and dimensionality of the randomly generated hyperplanes, classification performance over an RPS can vary significantly. Moreover, fine-tuning an RPS is data-expensive (it requires validation data), time-consuming, and resource-demanding. In this paper, we introduce an approach that learns an optimized kernel-based projection of fixed dimensionality by employing subspace clustering: we encode the association of each data point with its underlying subspace to generate meaningful hyperplanes. We further adopt dictionary learning, sparse coding, and discriminative analysis in the resulting optimized projection space (OPS) on SPD manifolds. We validate our algorithm on several classification tasks, and the experimental results demonstrate that the proposed method outperforms state-of-the-art methods on such manifolds.
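To make the kernel-based projection idea concrete, below is a minimal sketch (not the authors' implementation) that maps SPD matrices to fixed-length Euclidean feature vectors by evaluating a log-Euclidean kernel against a set of anchor points. In the paper the projection hyperplanes are optimized via subspace clustering; the sketch substitutes randomly chosen anchors purely to illustrate the projection step, and all function names and data here are illustrative.

```python
# Minimal sketch: kernel-based projection of SPD matrices into a
# fixed-dimensional Euclidean space. Anchors stand in for the learned
# hyperplanes described in the paper (which are obtained via subspace
# clustering, not random sampling).
import numpy as np
from scipy.linalg import logm

def random_spd(d, rng):
    """Generate a random d x d SPD matrix (toy stand-in for a covariance descriptor)."""
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

def log_euclidean_kernel(X, Y):
    """k(X, Y) = <log(X), log(Y)>_F, a common positive-definite kernel on the SPD manifold."""
    return np.trace(logm(X) @ logm(Y))

def project_to_kernel_space(spd_list, anchors):
    """Map each SPD matrix to a vector of kernel responses against the anchors."""
    return np.array([[log_euclidean_kernel(X, A) for A in anchors]
                     for X in spd_list])

rng = np.random.default_rng(0)
data = [random_spd(5, rng) for _ in range(20)]     # toy SPD descriptors
anchors = [random_spd(5, rng) for _ in range(8)]   # stand-in for optimized hyperplanes
features = project_to_kernel_space(data, anchors)  # shape (20, 8)
print(features.shape)
```

Once the data lie in this fixed-dimensional projection space, standard Euclidean tools such as dictionary learning, sparse coding, or a linear classifier can be applied directly, which is the role the OPS plays in the proposed pipeline.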

