This paper introduces the Variational Determinant Estimator (VDE), a variational extension of the determinant estimator recently proposed in arXiv:2005.06553v2.
Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.
Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR-10, and that the graph convolution exponential improves the performance of graph normalizing flows.
Autoregressive models (ARMs) currently hold state-of-the-art performance in likelihood-based modeling of image and audio data.
Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula.
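The change-of-variables computation described above can be sketched with a minimal example. This is an illustrative toy, not any paper's actual model: it assumes a hypothetical elementwise affine flow y = a * z + b (invertible whenever every a is nonzero) with a standard normal base density, so log p(y) = log p(z) - sum(log|a|) with z = (y - b) / a.

```python
import numpy as np

def base_log_density(z):
    # Standard normal base density log p(z), summed over dimensions.
    return -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=-1)

def flow_log_density(y, a, b):
    # Invert the flow: z = (y - b) / a.
    z = (y - b) / a
    # Change of variables: subtract log|det Jacobian| of the forward map,
    # which for an elementwise affine transform is sum(log|a|).
    return base_log_density(z) - np.sum(np.log(np.abs(a)))

# Toy parameters and a single 2-D observation (illustrative values only).
a = np.array([2.0, 0.5])
b = np.array([1.0, -1.0])
y = np.array([[1.0, 0.0]])
print(flow_log_density(y, a, b))
```

In a real NF the affine map is replaced by a deep invertible network, and the log-determinant term is accumulated layer by layer; the bookkeeping above stays the same.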
We generalize the 1 × 1 convolutions proposed in Glow to invertible d × d convolutions, which are more flexible since they operate on both channel and spatial axes.
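For reference, a Glow-style invertible 1 × 1 convolution can be sketched as below; this is a simplified NumPy illustration, not the paper's implementation. For a tensor of shape (H, W, C) it is a single shared C × C matrix multiply at every spatial position, so the log-determinant of the Jacobian is H * W * log|det W|.

```python
import numpy as np

def conv1x1(x, W):
    # Apply the same C x C mixing matrix at every spatial location.
    # x: (H, W, C); equivalent to a 1 x 1 convolution over channels.
    return x @ W.T

def conv1x1_logdet(x_shape, W):
    # The Jacobian is block-diagonal with H * W copies of W,
    # so log|det J| = H * W * log|det W|.
    H, Wd, _ = x_shape
    sign, logabsdet = np.linalg.slogdet(W)
    return H * Wd * logabsdet

rng = np.random.default_rng(0)
C = 3
W = rng.normal(size=(C, C))
x = rng.normal(size=(8, 8, C))

y = conv1x1(x, W)
# Invertibility: applying the inverse matrix recovers x exactly.
x_rec = conv1x1(y, np.linalg.inv(W))
print(np.allclose(x, x_rec))
```

The d × d generalization trades this cheap closed-form log-determinant for extra flexibility across spatial axes, which is what motivates the estimators discussed above.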