Neural Networks as Inter-Domain Inducing Points

Equivalences between infinitely wide neural networks and Gaussian processes have been established, and they are used to explain the functional prior and training dynamics of deep learning models. In this paper we cast the hidden units of a finite-width neural network as the inter-domain inducing points of a kernel, so that a one-hidden-layer network becomes a kernel regression model. For dot-product kernels on both $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$, we derive the corresponding kernel functions for the inducing points. Empirically, we conduct toy experiments to validate the proposed approaches.
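To make the correspondence concrete, below is a minimal NumPy sketch of the underlying idea (the toy data, variable names, and setup are illustrative and not from the paper, which derives exact inter-domain kernels that are not reproduced here): each hidden unit $\sigma(w_i^\top x)$ of a ReLU network acts as a basis function attached to the point $w_i$, the resulting finite-width kernel is a Monte Carlo estimate of the closed-form first-order arc-cosine (ReLU NNGP) kernel, and kernel ridge regression with that finite-width kernel coincides with training only the output layer of the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def arccos_kernel(X, Z):
    """Closed-form first-order arc-cosine (ReLU NNGP) kernel:
    k(x, z) = ||x|| ||z|| / (2*pi) * (sin t + (pi - t) * cos t), t = angle(x, z)."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    nz = np.linalg.norm(Z, axis=1, keepdims=True)
    cos_t = np.clip((X @ Z.T) / (nx * nz.T), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (nx * nz.T) / (2 * np.pi) * (np.sin(t) + (np.pi - t) * np.cos(t))

def relu_features(X, W):
    """Hidden-unit activations; row i of W plays the role of the point w_i."""
    return np.maximum(X @ W.T, 0.0) / np.sqrt(W.shape[0])

# Toy 1D regression; an appended bias coordinate keeps input norms nonzero.
x = np.linspace(-2, 2, 40)[:, None]
X = np.hstack([x, np.ones_like(x)])
y = np.sin(2 * x).ravel() + 0.1 * rng.standard_normal(40)

m = 512                              # number of hidden units
W = rng.standard_normal((m, 2))      # w_i ~ N(0, I)

# Finite-width kernel: each hidden unit contributes one basis function,
# so Phi @ Phi.T is a Monte Carlo estimate of the infinite-width kernel.
Phi = relu_features(X, W)
K_hat = Phi @ Phi.T
K = arccos_kernel(X, X)
print("max |K_hat - K| =", np.abs(K_hat - K).max())

# Kernel ridge regression with K_hat == ridge regression on the hidden
# activations, i.e. training only the output layer of the network.
alpha = np.linalg.solve(K_hat + 1e-3 * np.eye(40), y)
f_train = K_hat @ alpha
print("train RMSE =", np.sqrt(np.mean((f_train - y) ** 2)))
```

As the width m grows, K_hat converges to the arc-cosine kernel, recovering the infinite-width Gaussian-process view; at finite width, the hidden weights act as a finite set of points summarizing the kernel, which is the sense in which the abstract reads them as inter-domain inducing points.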
