Learning finite-dimensional coding schemes with nonlinear reconstruction maps

23 Dec 2018  ·  Jaeho Lee, Maxim Raginsky

This paper generalizes the Maurer--Pontil framework of finite-dimensional lossy coding schemes to the setting where a high-dimensional random vector is mapped to an element of a compact set of latent representations in a lower-dimensional Euclidean space, and the reconstruction map belongs to a given class of nonlinear maps. Under this setup, which encompasses a broad class of unsupervised representation learning problems, we establish a connection to approximate generative modeling under structural constraints using tools from the theory of optimal transport. Next, we consider the problem of learning a coding scheme on the basis of a finite collection of training samples and present generalization bounds that hold with high probability. We then illustrate the general theory in the setting where the reconstruction maps are implemented by deep neural nets.
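The sketch below is a minimal illustration (not the authors' code) of the setup the abstract describes: a high-dimensional vector is encoded into a point of a compact latent set in a lower-dimensional Euclidean space, decoded by a nonlinear reconstruction map implemented as a neural net, and the pair is fit by minimizing empirical distortion on a finite training sample. The specific architecture, the cube [-1, 1]^k as the compact latent set, the squared-error distortion, and all hyperparameters are illustrative assumptions, not choices taken from the paper.

```python
# Hypothetical sketch of a finite-dimensional coding scheme with a
# nonlinear (neural-net) reconstruction map, trained on n i.i.d. samples.
import torch
import torch.nn as nn

d, k, n = 64, 8, 1024  # ambient dimension, latent dimension, sample size (assumed values)

# Encoder maps R^d into the compact latent set [-1, 1]^k (enforced by Tanh).
encoder = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, k), nn.Tanh())
# Nonlinear reconstruction map f from the latent set back to R^d.
decoder = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, d))

X = torch.randn(n, d)  # placeholder training sample standing in for draws of the source
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(200):
    U = encoder(X)                                  # latent codes in the compact set
    X_hat = decoder(U)                              # nonlinear reconstruction f(U)
    loss = ((X - X_hat) ** 2).sum(dim=1).mean()     # empirical distortion E||X - f(U)||^2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this reading, the paper's generalization bounds concern how the distortion achieved by such an empirically trained coding scheme compares, with high probability, to the best achievable distortion over the given class of reconstruction maps.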
