Disentanglement

576 papers with code • 3 benchmarks • 12 datasets

Disentanglement is an approach to solving a diverse set of tasks in a data-efficient manner by isolating the underlying factors of variation of a problem into disjoint parts of its learned representation. One way to achieve this is to focus on the "transformation" properties of the world (the main problem).
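
As a rough illustration, a common way to probe whether a representation is disentangled is a latent traversal: vary one coordinate of the code while holding the rest fixed, and check whether exactly one factor (say, rotation) changes in the output. The sketch below assumes a trained VAE-style encoder/decoder pair; both names are hypothetical.

```python
# Minimal sketch of a latent traversal for probing disentanglement.
# `encoder` and `decoder` are hypothetical, assumed to be a trained
# VAE-style pair; only the probing logic is illustrated.
import torch

def traverse_latent(encoder, decoder, x, dim, values):
    """Vary one latent coordinate while holding the rest fixed.

    If the representation is disentangled, each coordinate should
    control a single factor of variation (e.g. rotation only).
    """
    z = encoder(x)                      # shape: (1, latent_dim)
    frames = []
    for v in values:
        z_mod = z.clone()
        z_mod[0, dim] = v               # isolate one factor
        frames.append(decoder(z_mod))
    return torch.cat(frames, dim=0)     # one output per traversal step
```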

Most implemented papers

A Style-Based Generator Architecture for Generative Adversarial Networks

NVlabs/stylegan CVPR 2019

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
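
The borrowed mechanism is adaptive instance normalization (AdaIN): each generator layer's feature statistics are replaced by a style-dependent scale and bias computed from the intermediate latent code. A minimal sketch follows; shapes are illustrative and several details of the official NVlabs/stylegan code are simplified away.

```python
# Minimal sketch of adaptive instance normalization (AdaIN), the
# style-transfer mechanism the style-based generator uses to inject a
# style vector w at each layer. Shapes and sizes are illustrative.
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, channels, w_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)  # learned affine map

    def forward(self, x, w):
        # Normalize each feature map, then rescale/shift it with
        # style-dependent statistics derived from w.
        scale, bias = self.affine(w).chunk(2, dim=1)
        x = self.norm(x)
        return scale[:, :, None, None] * x + bias[:, :, None, None]

x = torch.randn(4, 64, 16, 16)     # feature maps
w = torch.randn(4, 512)            # intermediate latent code
print(AdaIN(64, 512)(x, w).shape)  # torch.Size([4, 64, 16, 16])
```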

Disentangling by Factorising

clementchadebec/benchmark_VAE ICML 2018

We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation.
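
The method (FactorVAE) penalises the total correlation of the aggregate posterior, estimated with a density-ratio discriminator trained to tell joint latent samples from dimension-wise permuted ones. A condensed sketch, with the discriminator assumed to exist and be trained separately:

```python
# Sketch of the FactorVAE objective: the usual VAE loss plus a total
# correlation (TC) penalty, estimated with a discriminator D that tries
# to tell joint samples q(z) from permuted (factorised) samples.
import torch

def permute_dims(z):
    # Shuffle each latent dimension independently across the batch,
    # producing samples from the product of marginals prod_j q(z_j).
    # These are the "fake" examples when training the discriminator.
    return torch.stack([z[torch.randperm(z.size(0)), j]
                        for j in range(z.size(1))], dim=1)

def factor_vae_loss(recon_loss, kl, z, discriminator, gamma):
    # discriminator(z) outputs logits for "joint" vs "permuted"; the
    # logit difference is a density-ratio estimate of the TC term.
    # gamma is the TC weight, a hyperparameter.
    logits = discriminator(z)                       # shape: (batch, 2)
    tc_estimate = (logits[:, 0] - logits[:, 1]).mean()
    return recon_loss + kl + gamma * tc_estimate
```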

Adversarial Latent Autoencoders

podgorskiy/ALAE CVPR 2020

We designed two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE.
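
A rough sketch of the MLP variant's moving parts: the distinctive choice in ALAE is that the autoencoder's reciprocity is imposed in latent space (w ≈ E(G(w))) rather than in pixel space. Module sizes below are illustrative, and the adversarial game is omitted.

```python
# Rough sketch of the MLP-based ALAE components. Reconstruction is
# measured in latent space (w vs. E(G(w))) rather than pixel space;
# the adversarial training loop is omitted for brevity.
import torch
import torch.nn as nn

latent, w_dim, img_dim = 128, 128, 784
F = nn.Sequential(nn.Linear(latent, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))
G = nn.Sequential(nn.Linear(w_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))
E = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, w_dim))

z = torch.randn(16, latent)
w = F(z)                                   # map noise to latent code w
w_rec = E(G(w))                            # encode the generated sample
latent_recon_loss = ((w - w_rec) ** 2).mean()
```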

Isolating Sources of Disentanglement in Variational Autoencoders

rtqichen/beta-tcvae NeurIPS 2018

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables.
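
Concretely, the decomposition of the data-averaged KL term of the ELBO is the following (q(z) is the aggregate posterior):

```latex
% Index-code mutual information + total correlation + dimension-wise KL.
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

β-TCVAE then upweights only the total-correlation term, the component most directly tied to statistical independence of the latent dimensions.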

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

google-research/disentanglement_lib ICML 2019

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
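
As a toy illustration of that assumption, the snippet below generates observations from three independent ground-truth factors; benchmark datasets such as dSprites follow the same recipe at scale. The renderer here is hypothetical.

```python
# Toy illustration of the generative assumption: observations produced
# by a handful of independent ground-truth factors.
import numpy as np

rng = np.random.default_rng(0)

def render(scale, x_pos, y_pos, size=16):
    # Hypothetical renderer: a square whose size and position are the
    # only sources of variation in the image.
    img = np.zeros((size, size))
    half = max(1, int(scale * size / 4))
    cx, cy = int(x_pos * (size - 1)), int(y_pos * (size - 1))
    img[max(0, cy - half):cy + half, max(0, cx - half):cx + half] = 1.0
    return img

factors = rng.uniform(size=(1000, 3))          # (scale, x, y), independent
images = np.stack([render(*f) for f in factors])
# A disentanglement method sees only `images` and should recover `factors`.
```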

Sigmoid Loss for Language Image Pre-Training

google-research/big_vision ICCV 2023

We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP).
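
With the sigmoid loss, every image-text pair in a batch becomes an independent binary classification problem (match vs. mismatch), removing the batch-wide softmax normalisation of the usual contrastive loss. A compact sketch, with t and b the learnable temperature and bias (initial values follow the paper):

```python
# Sketch of the pairwise sigmoid loss: each of the n*n image-text pairs
# is an independent binary example, labelled +1 on the diagonal
# (matching pairs) and -1 elsewhere.
import torch
import torch.nn.functional as F

def siglip_loss(img_emb, txt_emb, t, b):
    # img_emb, txt_emb: (n, d), assumed L2-normalised.
    logits = img_emb @ txt_emb.T * t + b           # (n, n) pair scores
    labels = 2 * torch.eye(len(img_emb)) - 1       # +1 diagonal, -1 off
    return -F.logsigmoid(labels * logits).sum() / len(img_emb)

img = F.normalize(torch.randn(8, 64), dim=-1)
txt = F.normalize(torch.randn(8, 64), dim=-1)
print(siglip_loss(img, txt, t=torch.tensor(10.0), b=torch.tensor(-10.0)))
```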

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

AntixK/PyTorch-VAE ICLR 2017

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.
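
The constrained framework amounts to reweighting the KL term of the ELBO with a coefficient β > 1, which pressures the posterior toward the factorised prior at some cost in reconstruction quality. A minimal sketch of the objective, using the closed-form Gaussian KL (β = 4 is a value used in the paper's experiments):

```python
# Minimal sketch of the beta-VAE objective: a standard VAE ELBO where
# the KL term is weighted by beta > 1; beta = 1 recovers the plain VAE.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")  # -log p(x|z) up to const.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```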

Learning concise representations for regression by evolving networks of trees

lacava/feat ICLR 2019

We propose and study a method for learning interpretable representations for the task of regression.
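
The sketch below is a loose, self-contained illustration of the underlying idea (symbolic tree-structured features with a linear readout), not the lacava/feat API; FEAT evolves the feature set with genetic operators rather than fixing it by hand.

```python
# Loose illustration of the idea behind FEAT (not the library's API):
# candidate features are small symbolic trees; a linear model is fit on
# their outputs, so the learned representation stays human-readable.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # hidden ground truth

# Hand-picked "population" of tree features; FEAT evolves these with
# selection, crossover and mutation instead of enumerating them.
features = {
    "sin(x0)": np.sin(X[:, 0]),
    "x1*x2":   X[:, 1] * X[:, 2],
    "x0+x1":   X[:, 0] + X[:, 1],
}
Phi = np.column_stack(list(features.values()))
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
for name, c in zip(features, coef):
    print(f"{c:+.2f} * {name}")                  # interpretable model
```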

LEO: Generative Latent Image Animator for Human Video Synthesis

wyhsirius/LEO 6 May 2023

Our key idea is to represent motion as a sequence of flow maps in the generation process, which inherently isolate motion from appearance.
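
A minimal sketch of the appearance/motion split this enables: one appearance frame warped by a sequence of flow maps yields a video. The flows below are random noise purely for shape-checking; in LEO they come from a learned latent motion generator.

```python
# Sketch of animating a single appearance frame with a sequence of
# flow maps, so motion (the flows) is kept separate from appearance
# (the frame being warped).
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # frame: (1, C, H, W); flow: (1, H, W, 2) in normalised [-1, 1] coords.
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)
    return F.grid_sample(frame, base_grid + flow, align_corners=True)

frame = torch.rand(1, 3, 64, 64)                    # appearance: one image
flows = 0.05 * torch.randn(16, 1, 64, 64, 2)        # motion: 16 flow maps
video = torch.cat([warp(frame, f) for f in flows])  # (16, 3, 64, 64)
```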

On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset

rr-learning/disentanglement_dataset NeurIPS 2019

Learning meaningful and compact representations with disentangled semantic aspects is considered to be of key importance in representation learning.