Saddlepoints in Unsupervised Least Squares

11 Apr 2021  ·  Samuel Gerber ·

This paper sheds light on the risk landscape of unsupervised least squares in the context of deep auto-encoding neural nets. We formally establish an equivalence between unsupervised least squares and principal manifolds. This link provides insight into the risk landscape of auto-encoding under the mean squared error; in particular, all non-trivial critical points are saddlepoints. Finding saddlepoints is in itself difficult, and overcomplete auto-encoding poses the additional challenge that the saddlepoints are degenerate. Within this context we discuss regularization of auto-encoders, in particular bottleneck, denoising, and contractive auto-encoding, and propose a new optimization strategy that can be framed as a particular form of contractive regularization.
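As a concrete illustration of the objects discussed in the abstract, the sketch below writes down the mean-squared-error reconstruction risk of a linear auto-encoder together with a contractive penalty (the squared Frobenius norm of the encoder Jacobian, which for a linear encoder is the weight matrix itself). This is a minimal illustrative example, not the paper's method; the function names, shapes, and the weighting parameter `lam` are assumptions.

```python
import numpy as np

def mse_risk(X, W_enc, W_dec):
    # Empirical reconstruction risk under mean squared error:
    # average of ||x - W_dec @ W_enc @ x||^2 over the rows of X.
    # (Illustrative linear auto-encoder; shapes assumed, not from the paper.)
    R = X - X @ W_enc.T @ W_dec.T
    return np.mean(np.sum(R**2, axis=1))

def contractive_penalty(W_enc):
    # Contractive term: squared Frobenius norm of the encoder Jacobian.
    # For a linear encoder h = W_enc @ x, the Jacobian is W_enc itself.
    return np.sum(W_enc**2)

def regularized_risk(X, W_enc, W_dec, lam=0.1):
    # MSE risk plus contractive regularization; lam is a hypothetical
    # trade-off weight chosen for illustration.
    return mse_risk(X, W_enc, W_dec) + lam * contractive_penalty(W_enc)
```

With an identity encoder/decoder pair the reconstruction risk vanishes, so only the contractive term contributes, which makes the trade-off between fitting and contraction easy to inspect numerically.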

