Search Results for author: Jacobus W. Portegies

Found 6 papers, 2 papers with code

Neural Langevin Dynamics: towards interpretable Neural Stochastic Differential Equations

no code implementations • 17 Nov 2022 • Simon M. Koop, Mark A. Peletier, Jacobus W. Portegies, Vlado Menkovski

Neural Stochastic Differential Equations (NSDEs) have been trained both as Variational Autoencoders and as GANs.

A Metric for Linear Symmetry-Based Disentanglement

no code implementations • 26 Nov 2020 • Luis A. Pérez Rey, Loek Tonnaer, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies

We propose a metric to evaluate the level of Linear Symmetry-Based Disentanglement (LSBD) that a data representation achieves.

Disentanglement

Quantifying and Learning Linear Symmetry-Based Disentanglement

1 code implementation • NeurIPS 2021 • Loek Tonnaer, Luis A. Pérez Rey, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies

The definition of Linear Symmetry-Based Disentanglement (LSBD) formalizes the notion of linearly disentangled representations, but there is currently no metric to quantify LSBD.

Disentanglement • Interpretable Machine Learning

Quantifying and Learning Disentangled Representations with Limited Supervision

no code implementations • 28 Sep 2020 • Loek Tonnaer, Luis Armando Pérez Rey, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies

Although several works focus on learning LSBD representations, such methods require supervision on the underlying transformations for the entire dataset, and cannot deal with unlabeled data.

Disentanglement • Interpretable Machine Learning

Diffusion Variational Autoencoders

2 code implementations • 25 Jan 2019 • Luis A. Pérez Rey, Vlado Menkovski, Jacobus W. Portegies

A standard Variational Autoencoder, with a Euclidean latent space, is structurally incapable of capturing topological properties of certain datasets.
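A minimal numerical sketch (not the paper's method) of the topological mismatch this abstract refers to: a circle has topology S¹, so any continuous 1-D Euclidean latent code must "cut" it somewhere, mapping some pair of adjacent points on the circle to distant codes.

```python
import numpy as np

# Points on the unit circle (topology S^1). A 1-D Euclidean latent
# code for these points cannot be both continuous and injective.
angles = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Encode each point by its angle via atan2, a natural R^1 latent code.
# atan2 returns values in (-pi, pi], so the code is discontinuous at
# the wrap-around point, even though the inputs there are neighbors.
codes = np.arctan2(circle[:, 1], circle[:, 0])

gaps = np.abs(np.diff(codes))
print(gaps.max())     # one large jump (close to 2*pi) at the cut
print(np.median(gaps))  # typical gap between neighbors is tiny
```

This is the obstruction a latent space with matching topology (e.g. a hypersphere, as used in Diffusion VAEs) avoids.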
