Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity

7 Jun 2017 · Florian Schäfer, T. J. Sullivan, Houman Owhadi

Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_{i} \}_{1 \leq i \leq N}$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset $S \subset \{1, \dots, N\} \times \{1, \dots, N\}$, with $\#S = \mathcal{O}(N \log(N) \log^{d}(N/\epsilon))$, such that the zero fill-in incomplete Cholesky factorisation of $\Theta_{i,j} \mathbb{1}_{(i,j) \in S}$ is an $\epsilon$-approximation of $\Theta$. This block-factorisation can provably be obtained in complexity $\mathcal{O}(N \log(N) \log^{d}(N/\epsilon))$ in space and $\mathcal{O}(N \log^{2}(N) \log^{2d}(N/\epsilon))$ in time. The algorithm only needs the spatial configuration of the $x_{i}$ and does not require an analytic representation of $G$. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be read off directly from this decomposition. Hence, using only subsampling and the incomplete Cholesky decomposition, we obtain, at nearly linear complexity, the compression, inversion, and approximate PCA of a large class of covariance matrices. By reversing the order of the Cholesky decomposition we also obtain a solver for elliptic PDEs with complexity $\mathcal{O}(N \log(N) \log^{d}(N/\epsilon))$ in space and $\mathcal{O}(N \log^{2}(N) \log^{2d}(N/\epsilon))$ in time.
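The pipeline described above (a hierarchical ordering of the points, a distance-based sparsity pattern, and a zero fill-in incomplete Cholesky factorisation) is concrete enough to prototype. Below is a minimal NumPy sketch, not the authors' implementation: it assumes an exponential kernel $\exp(-|x-y|)$ as a stand-in for a Green's-function covariance, and the pattern rule $\operatorname{dist}(x_i, x_j) \leq \rho \max(\ell_i, \ell_j)$ is one plausible reading of the paper's maximin-ordering construction. The names `maximin_ordering`, `sparsity_pattern`, `ichol0`, and the parameter `rho` are illustrative. Dense $\mathcal{O}(N^2)$ loops are used for clarity; the near-linear complexity claimed in the abstract requires sparse storage and the orderings and aggregation detailed in the paper.

```python
# Hypothetical prototype of the abstract's pipeline; names and the exact
# sparsity rule are assumptions, not taken from the paper's code.
import numpy as np

def maximin_ordering(x):
    """Coarse-to-fine ordering: each new point maximises its distance to
    all previously selected points; ell[k] records that distance."""
    N = x.shape[0]
    order, ell = [0], [np.inf]            # first point is arbitrary
    d = np.linalg.norm(x - x[0], axis=1)  # distance to the selected set
    for _ in range(1, N):
        k = int(np.argmax(d))
        order.append(k)
        ell.append(d[k])
        d = np.minimum(d, np.linalg.norm(x - x[k], axis=1))
    return np.array(order), np.array(ell)

def sparsity_pattern(x, ell, rho):
    """S = {(i, j) : dist(x_i, x_j) <= rho * max(ell_i, ell_j)}."""
    D = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    return D <= rho * np.maximum(ell[:, None], ell[None, :])

def ichol0(A, S):
    """Zero fill-in incomplete Cholesky: the classical algorithm, with
    entries computed and stored only on the lower triangle of S.  This
    can break down (negative pivot) for general SPD matrices; the paper
    proves it succeeds for its class of kernels and patterns."""
    N = A.shape[0]
    L = np.zeros_like(A)
    for j in range(N):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in np.nonzero(S[j + 1:, j])[0] + j + 1:
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(size=(400, 2))        # roughly homogeneous points in [0,1]^2
    order, ell = maximin_ordering(x)
    x = x[order]                          # reorder points coarse to fine
    Theta = np.exp(-np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))
    S = sparsity_pattern(x, ell, rho=3.0)
    L = ichol0(Theta * S, S)              # factor Theta_{ij} 1_{(i,j) in S}
    err = np.linalg.norm(Theta - L @ L.T, 2) / np.linalg.norm(Theta, 2)
    print(f"pattern density {S.mean():.3f}, relative operator-norm error {err:.2e}")
    # Rank-k approximation from the k coarsest columns: the abstract's
    # "approximate PCA" read-off.
    k = 50
    Theta_k = L[:, :k] @ L[:, :k].T
```

Here `rho` trades sparsity against accuracy: larger values enlarge $\#S$ (and cost) while shrinking $\epsilon$. The final two lines illustrate the PCA read-off: because the ordering runs coarse to fine, truncating the factorisation to its first $k$ columns yields a rank-$k$ approximation of $\Theta$.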


Categories

Numerical Analysis · Computational Complexity · Data Structures and Algorithms · Probability

MSC: 65F30, 42C40, 65F50, 65N55, 65N75, 60G42, 68Q25, 68W40