Survey: Geometric Foundations of Data Reduction

16 Aug 2020 · Ce Ju

This survey was written in the summer of 2016. Its purpose is to briefly introduce nonlinear dimensionality reduction (NLDR) for data reduction. The first two NLDR methods were both published in Science in 2000, and they solve a similar reduction problem for high-dimensional data endowed with intrinsic nonlinear structure. This intrinsic nonlinear structure is usually interpreted by computer scientists and theoretical physicists as a manifold, a concept from geometry and topology in theoretical mathematics. In 2001, the concept of manifold learning first appeared with an NLDR method called Laplacian Eigenmaps. In a typical manifold-learning setup, the data set, also called the observation set, is distributed on or near a low-dimensional manifold M embedded in R^D, so that each observation has a D-dimensional representation. The goal of manifold learning is to reduce these observations to a compact lower-dimensional representation based on their geometric information; this reduction procedure is called spectral manifold learning. In this paper, we derive each spectral manifold learning method in matrix and operator form, and we then discuss the convergence behavior of each method in a uniform geometric language. Hence the survey is named Geometric Foundations of Data Reduction.
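To make the spectral manifold learning pipeline concrete, the following is a minimal sketch of Laplacian Eigenmaps, the 2001 method mentioned above: build a k-nearest-neighbor graph over the observations, form the graph Laplacian L = D - W, and embed via the bottom nontrivial eigenvectors of the generalized eigenproblem L y = λ D y. This is an illustrative simplification (binary edge weights instead of heat-kernel weights, brute-force neighbor search), not the paper's own derivation; the function name and toy data are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10):
    """Embed the rows of X (N x D) into n_components dimensions
    via the graph Laplacian of a symmetrized kNN graph."""
    N = X.shape[0]
    dists = cdist(X, X)                      # pairwise Euclidean distances
    W = np.zeros((N, N))                     # binary adjacency weights (a simplification)
    for i in range(N):
        idx = np.argsort(dists[i])[1:n_neighbors + 1]  # k nearest, skipping self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                   # symmetrize the graph
    D = np.diag(W.sum(axis=1))               # degree matrix
    L = D - W                                # unnormalized graph Laplacian
    # Generalized eigenproblem L y = lambda D y; eigh returns eigenvalues
    # in ascending order, so we drop the trivial constant eigenvector.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]

# Toy example: a noisy circle embedded in R^3, reduced to 2 dimensions.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.standard_normal(200)]
Y = laplacian_eigenmaps(X, n_components=2, n_neighbors=8)
print(Y.shape)
```

Here each observation lives in R^3 (D = 3) while the intrinsic structure is one-dimensional; the spectral embedding recovers a compact low-dimensional representation from purely local geometric information.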
