
A deep network construction that adapts to intrinsic dimensionality beyond the domain

We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation, where $\phi$ is a geometrically intuitive, dimensionality-reducing feature map. We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and the distance to a collection of low-dimensional sets. We achieve near-optimal approximation rates, which depend only on the complexity of the dimensionality-reducing map $\phi$ rather than the ambient dimension. Since $\phi$ encapsulates all nonlinear features that are material to the function $f$, this suggests that deep nets are faithful to an intrinsic dimension governed by $f$ rather than by the complexity of the domain of $f$. In particular, the prevalent assumption of approximating functions on low-dimensional manifolds can be significantly relaxed using functions of the type $f(x) = g(\phi(x))$ with $\phi$ representing an orthogonal projection onto the same manifold.
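To make the function class concrete, the following is a minimal Python sketch, not taken from the paper, of compositions $f(x) = g(\phi(x))$ for the two choices of $\phi$ mentioned above. The specific manifold (a circle embedded in the first two coordinates of $\mathbb{R}^D$), the collection of line segments, and the outer function `g` are illustrative assumptions only; the point is that $f$ depends on $x$ solely through the low-complexity feature $\phi(x)$, not through the ambient dimension $D$.

```python
# Illustrative sketch (assumed example, not the paper's construction) of
# f(x) = g(phi(x)) with two dimensionality-reducing feature maps phi:
#  (i) orthogonal projection onto a low-dimensional embedded submanifold,
# (ii) distance to a collection of low-dimensional sets.
import numpy as np

D = 20                          # ambient dimension (arbitrary choice)
rng = np.random.default_rng(0)

def project_to_circle(x):
    """Orthogonal projection of x in R^D onto the unit circle embedded in
    the first two coordinates (a 1-dimensional submanifold); assumes the
    first two coordinates of x are not both zero."""
    p = np.zeros_like(x)
    p[:2] = x[:2] / np.linalg.norm(x[:2])   # nearest point on the circle
    return p

def dist_to_segments(x, segments):
    """Distance from x to a union of line segments (a, b), i.e. to a
    collection of low-dimensional sets in R^D."""
    dists = []
    for a, b in segments:
        t = np.clip(np.dot(x - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        dists.append(np.linalg.norm(x - (a + t * (b - a))))
    return min(dists)

def g(z):
    """Hypothetical low-complexity outer function applied to phi(x)."""
    return np.sin(np.sum(z))

# Both targets depend on x only through phi(x): their effective complexity
# is governed by the feature map phi rather than by the ambient dimension D.
x = rng.standard_normal(D)
segments = [(rng.standard_normal(D), rng.standard_normal(D)) for _ in range(3)]
f_projection = g(project_to_circle(x))
f_distance = g(np.array([dist_to_segments(x, segments)]))
print(f_projection, f_distance)
```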
