On the interventional consistency of autoencoders

29 Sep 2021 · Giulia Lanzillotta, Felix Leeb, Stefan Bauer, Bernhard Schölkopf

Autoencoders have played a crucial role in representation learning since the field's inception, proving to be a flexible learning scheme that can accommodate various notions of optimality of the representation. The now-established idea of disentanglement and the recently popular causal perspective on representation learning both identify modularity and robustness as essential characteristics of an optimal representation. In this work, we show that the current conceptual tools available for assessing the quality of a representation against these criteria (e.g. latent traversals or disentanglement metrics) are inadequate. We therefore introduce the notion of *interventional consistency* of a representation and argue that it is a desirable property of any disentangled representation. We develop a general training scheme for autoencoders that incorporates interventional consistency into the optimality condition, and we present empirical evidence for the validity of the approach on three different autoencoders: standard autoencoders (AE), variational autoencoders (VAE), and structural autoencoders (SAE). Another key finding of this work is that differentiating between information and structure in the latent space of autoencoders can increase the modularity and interpretability of the resulting representation.
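The abstract does not spell out the formal criterion, but one plausible reading of interventional consistency is that intervening on a latent code, decoding, and re-encoding should recover the intervened code. Below is a minimal sketch of such a check under that assumption, using a toy linear encoder/decoder; the names `encode`, `decode`, and `do_intervention` are illustrative placeholders, not the paper's API.

```python
import numpy as np

# Hypothetical interventional-consistency check: intervene on one latent
# coordinate, decode, re-encode, and measure how far the recovered code
# drifts from the intervened one. A toy linear model stands in for the
# trained autoencoder; this is not the paper's actual training scheme.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))           # toy linear "decoder" weights

def decode(z):
    return W @ z                      # latent -> observation

def encode(x):
    return np.linalg.pinv(W) @ x      # observation -> latent (pseudo-inverse)

def do_intervention(z, dim, value):
    """Set a single latent coordinate to a fixed value (a 'do' operation)."""
    z = z.copy()
    z[dim] = value
    return z

def consistency_error(z, dim, value):
    z_do = do_intervention(z, dim, value)
    z_rec = encode(decode(z_do))      # re-encode the intervened sample
    return np.linalg.norm(z_rec - z_do)

z = rng.normal(size=4)
print(consistency_error(z, dim=2, value=3.0))  # ~0 for this linear toy
```

For this linear toy the error is near zero by construction; the interesting case is a trained nonlinear autoencoder, where a large error on intervened codes would signal that latent traversals leave the region the encoder can faithfully represent.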

