Eliminating the Invariance on the Loss Landscape of Linear Autoencoders

In this paper, we propose a new loss function for linear autoencoders (LAEs) and analytically characterize the structure of its loss surface. Optimizing the conventional Mean Squared Error (MSE) loss yields a decoder matrix that spans the principal subspace of the sample covariance of the data but fails to identify the exact eigenvectors. This shortcoming stems from an invariance of the MSE loss: transforming the decoder by any invertible matrix and the encoder by its inverse leaves the global map, and hence the loss, unchanged. Here, we prove that our loss function eliminates this invariance, i.e., the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we characterize the full structure of the loss landscape in the following sense: we derive an analytical expression for the set of all critical points, show that it is a subset of the critical points of MSE, and show that all local minima are still global. However, the global minima that are invariant under MSE become saddle points under the new loss. Moreover, we show that the computational complexity of the loss and its gradients is of the same order as that of MSE; hence, the new loss is not only of theoretical importance but also of practical value, e.g., for low-rank approximation.
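The abstract does not spell out the loss itself. The sketch below is a minimal illustration, assuming a nested-truncation form L(A, B) = Σᵢ ‖X − A Tᵢ B X‖²_F, where Tᵢ zeroes out all but the first i latent coordinates; plain MSE is the i = p term alone, and summing over all prefixes breaks the invariance A → AC, B → C⁻¹B described above. The function name `nested_lae_loss`, the dimensions, and the training hyperparameters are illustrative assumptions, not the paper's reference implementation, and the loop evaluates each truncation separately for clarity (p times the cost of one MSE pass; the same-order complexity claimed in the paper requires a more careful evaluation).

```python
# Hedged sketch of a prefix-sum ("nested truncation") LAE loss -- an assumed
# form consistent with the abstract, not the authors' released code.
import torch

def nested_lae_loss(A, B, X):
    """Sum of reconstruction errors over nested truncations of the latent code."""
    p = B.shape[0]                       # latent dimension
    Z = B @ X                            # codes, shape (p, n)
    loss = X.new_zeros(())
    for i in range(1, p + 1):
        # Keep the first i latent coordinates, zero out the rest.
        Zi = torch.cat([Z[:i], torch.zeros_like(Z[i:])])
        loss = loss + ((X - A @ Zi) ** 2).sum()
    return loss

# Tiny demonstration: fit (A, B) by gradient descent and compare the decoder
# columns with the top eigenvectors of the sample covariance.
torch.manual_seed(0)
d, p, n = 5, 3, 2000
X = torch.randn(n, d) @ torch.diag(torch.tensor([3.0, 2.0, 1.0, 0.3, 0.1]))
X = X.T                                  # columns are samples, shape (d, n)

A = torch.randn(d, p, requires_grad=True)    # decoder
B = torch.randn(p, d, requires_grad=True)    # encoder
opt = torch.optim.Adam([A, B], lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    loss = nested_lae_loss(A, B, X) / n
    loss.backward()
    opt.step()

cov = (X @ X.T) / n
eigvecs = torch.linalg.eigh(cov).eigenvectors.flip(-1)   # descending eigenvalue order
# Up to per-column scale ("unnormalized"), A should align with the top-p
# eigenvectors, in order; each |dot product| should approach 1 at convergence.
cols = A.detach() / A.detach().norm(dim=0)
print((cols * eigvecs[:, :p]).sum(dim=0).abs())
```

The final printout compares the normalized decoder columns against the top-p eigenvectors of the sample covariance; under the assumed loss, the magnitudes should approach 1 column by column, with the ordering fixed by the truncation rather than left ambiguous as under MSE.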

ICML 2020
