no code implementations • 17 Nov 2022 • Simon M. Koop, Mark A. Peletier, Jacobus W. Portegies, Vlado Menkovski
Neural Stochastic Differential Equations (NSDEs) have been trained both as Variational Autoencoders and as GANs.
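To make the object of study concrete: a neural SDE replaces the drift and diffusion terms of an SDE with learned networks, and sample paths are typically simulated with the Euler–Maruyama scheme. Below is a minimal hedged sketch of that simulation step; the linear `drift` and constant `diffusion` stand in for the trained networks used in the paper, and all function names are illustrative, not the authors' API.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng):
    """Simulate dX_t = drift(X_t, t) dt + diffusion(X_t, t) dW_t."""
    dt = (t1 - t0) / n_steps
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(n_steps):
        # Brownian increment dW_t ~ N(0, dt)
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + drift(x, t) * dt + diffusion(x, t) * dw
        t += dt
    return x

# Toy stand-ins for learned networks: mean-reverting drift, small constant noise.
drift = lambda x, t: -x
diffusion = lambda x, t: 0.1
rng = np.random.default_rng(0)
x1 = euler_maruyama(drift, diffusion, x0=np.ones(3), t0=0.0, t1=1.0, n_steps=100, rng=rng)
```

In a VAE- or GAN-style training setup, the simulated path (or its endpoint `x1`) would feed into a reconstruction or adversarial loss, with gradients flowing back through the solver.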
no code implementations • 18 Dec 2020 • Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda
The first theorem applies to dropout networks in random mode.
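"Random mode" refers to keeping the stochastic dropout masks active in the forward pass, as opposed to the deterministic mode that replaces the mask by its expectation. A minimal sketch of the two modes, using inverted dropout (my assumption; the paper's exact convention may differ):

```python
import numpy as np

def dropout_random(x, p, rng):
    # Random mode: sample a fresh Bernoulli mask each forward pass.
    # Inverted dropout scales kept units by 1/(1-p) so the mean is preserved.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def dropout_deterministic(x, p):
    # Deterministic mode: replace the mask by its expectation (1-p);
    # with inverted dropout the scalings cancel and this is the identity.
    return x

rng = np.random.default_rng(0)
x = np.ones(5)
y_rand = dropout_random(x, p=0.5, rng=rng)   # entries are 0.0 or 2.0
y_det = dropout_deterministic(x, p=0.5)      # unchanged
```

Convergence results for training then have to account for the extra mask randomness in random mode, which is what distinguishes the two regimes.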
no code implementations • 26 Nov 2020 • Luis A. Pérez Rey, Loek Tonnaer, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies
We propose a metric for the evaluation of the level of LSBD that a data representation achieves.
1 code implementation • NeurIPS 2021 • Loek Tonnaer, Luis A. Pérez Rey, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies
The definition of Linear Symmetry-Based Disentanglement (LSBD) formalizes the notion of linearly disentangled representations, but there is currently no metric to quantify LSBD.
no code implementations • 28 Sep 2020 • Loek Tonnaer, Luis Armando Pérez Rey, Vlado Menkovski, Mike Holenderski, Jacobus W. Portegies
Although several works focus on learning LSBD representations, such methods require supervision on the underlying transformations for the entire dataset, and cannot deal with unlabeled data.
2 code implementations • 25 Jan 2019 • Luis A. Pérez Rey, Vlado Menkovski, Jacobus W. Portegies
A standard Variational Autoencoder with a Euclidean latent space is structurally incapable of capturing topological properties of certain datasets.
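The canonical example of this limitation is data lying on a circle: any map from the circle into a 1-D Euclidean latent space must tear the circle somewhere, so nearby data points end up with distant codes. A small hedged illustration (not the authors' construction, just the underlying topological fact):

```python
import numpy as np

# Data on the circle S^1 embedded in R^2.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
data = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# A 1-D Euclidean "encoder" must use something like atan2, which is
# discontinuous: points on either side of the cut map to distant codes.
z = np.arctan2(data[:, 1], data[:, 0])   # codes in (-pi, pi]
jumps = np.abs(np.diff(z))               # largest jump is close to 2*pi
```

Latent spaces with matching topology (e.g. a hyperspherical latent) avoid this tear, which motivates moving beyond the Euclidean prior.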