Inductive Biases for Contrastive Learning of Disentangled Representations

29 Sep 2021 · Jonathan Kahana, Yedid Hoshen

Learning disentangled representations is a core machine learning task. It has been shown that this task requires inductive biases. Recent work on class-content disentanglement has achieved excellent performance, but it requires generative modeling of the entire dataset, which can be computationally demanding. Current discriminative approaches are typically based on adversarial training and do not reach comparable accuracy. In this paper, we investigate how to transfer the inductive biases implicit in generative approaches to contrastive methods. Based on our findings, we propose a new non-adversarial, non-generative method named ABCD (Augmentation Based Contrastive Disentanglement). ABCD uses contrastive representation learning, relying only on content-invariant augmentations, to achieve domain-disentangled representations. Its discriminative approach makes ABCD much faster to train than generative approaches. We evaluate ABCD on image translation and retrieval tasks and obtain state-of-the-art results.
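The central ingredient, contrastive learning over content-invariant augmentations, lends itself to a short sketch. Below is a minimal, hypothetical illustration using a SimCLR-style InfoNCE objective in PyTorch: two augmented views of the same image are treated as positives, other images in the batch as negatives. The toy encoder, the noise-based augmentation, and all hyperparameters are illustrative assumptions, not the paper's actual architecture or loss.

```python
# Minimal sketch of contrastive learning with content-invariant augmentations.
# Assumptions (not from the paper): a SimCLR-style InfoNCE objective, a toy
# CNN encoder, and additive noise standing in for real content-invariant
# augmentations such as color jitter.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy CNN encoder mapping images to L2-normalized embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: matching rows of z1 and z2 (two views of the same
    image) are positives; all other pairs in the batch are negatives."""
    logits = z1 @ z2.t() / temperature      # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: embed two content-invariant views of the same batch and backprop.
encoder = Encoder()
x = torch.randn(8, 3, 64, 64)               # stand-in for a real image batch
view1 = x + 0.1 * torch.randn_like(x)       # augmentation that preserves content
view2 = x + 0.1 * torch.randn_like(x)
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
```

Because the objective is purely discriminative, training involves only an encoder and a contrastive loss; no decoder or discriminator is needed, which is what makes this style of approach faster to train than generative alternatives.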
