Search Results for author: Emiel Hoogeboom

Found 14 papers, 9 papers with code

Discrete Denoising Flows

no code implementations24 Jul 2021 Alexandra Lindt, Emiel Hoogeboom

Discrete flow-based models are a recently proposed class of generative models that learn invertible transformations for discrete random variables.

Denoising
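
The core building block in such models is a discrete, exactly invertible map. Below is a minimal NumPy sketch of one such map, a modulo-shift coupling in the style of discrete flows; shift_fn is a hypothetical stand-in for a learned conditioning network, not code from the paper.

    import numpy as np

    def discrete_coupling(x, shift_fn, K):
        # Invertible coupling on values in {0, ..., K-1}: the first half
        # conditions a shift applied (mod K) to the second half, so the
        # map is exactly invertible and needs no Jacobian term.
        x1, x2 = x[: len(x) // 2], x[len(x) // 2 :]
        return np.concatenate([x1, (x2 + shift_fn(x1)) % K])

    def discrete_coupling_inverse(z, shift_fn, K):
        z1, z2 = z[: len(z) // 2], z[len(z) // 2 :]
        return np.concatenate([z1, (z2 - shift_fn(z1)) % K])

    K = 10
    x = np.array([3, 7, 1, 9])
    shift_fn = lambda cond: cond.sum() % K   # hypothetical conditioner
    z = discrete_coupling(x, shift_fn, K)
    assert np.array_equal(discrete_coupling_inverse(z, shift_fn, K), x)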

E(n) Equivariant Normalizing Flows

no code implementations19 May 2021 Victor Garcia Satorras, Emiel Hoogeboom, Fabian B. Fuchs, Ingmar Posner, Max Welling

This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs).

E(n) Equivariant Graph Neural Networks

2 code implementations19 Feb 2021 Victor Garcia Satorras, Emiel Hoogeboom, Max Welling

This paper introduces E(n)-Equivariant Graph Neural Networks (EGNNs), a new class of graph neural networks equivariant to rotations, translations, reflections and permutations.

Representation Learning
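
A minimal NumPy sketch of the message-passing step this model family uses, assuming the published EGNN update (invariant messages built from pairwise distances, coordinate updates along relative positions); phi_e, phi_x and phi_h stand in for small learned MLPs. The final lines check E(n) equivariance under a random orthogonal transform.

    import numpy as np

    def egnn_layer(h, x, phi_e, phi_x, phi_h):
        # h: (n, d) node features, x: (n, 3) coordinates.
        n = x.shape[0]
        diff = x[:, None, :] - x[None, :, :]          # (n, n, 3) relative positions
        dist2 = (diff ** 2).sum(-1, keepdims=True)    # invariant squared distances
        hi = np.repeat(h[:, None, :], n, axis=1)
        hj = np.repeat(h[None, :, :], n, axis=0)
        m = phi_e(np.concatenate([hi, hj, dist2], axis=-1))  # invariant messages
        mask = 1.0 - np.eye(n)[..., None]             # exclude self-interactions
        x_new = x + (diff * phi_x(m) * mask).sum(1) / (n - 1) # equivariant update
        h_new = phi_h(np.concatenate([h, (m * mask).sum(1)], axis=-1))
        return h_new, x_new

    rng = np.random.default_rng(0)
    We = 0.1 * rng.normal(size=(2 * 4 + 1, 8))        # toy fixed "MLP" weights
    Wx = 0.1 * rng.normal(size=(8, 1))
    Wh = 0.1 * rng.normal(size=(4 + 8, 4))
    phi_e = lambda a: np.tanh(a @ We)
    phi_x = lambda m: m @ Wx
    phi_h = lambda a: np.tanh(a @ Wh)

    h, x = rng.normal(size=(5, 4)), rng.normal(size=(5, 3))
    h1, x1 = egnn_layer(h, x, phi_e, phi_x, phi_h)
    Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]      # random rotation/reflection
    h2, x2 = egnn_layer(h, x @ Q, phi_e, phi_x, phi_h)
    assert np.allclose(x2, x1 @ Q) and np.allclose(h2, h1)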

Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions

2 code implementations10 Feb 2021 Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling

Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow), and an argmax function.

Denoising · Language Modelling · +1
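
A sketch of the sampling direction implied by that composition, assuming nothing beyond the abstract: draw a continuous vector per position from the underlying density model, then take the argmax over classes. Training additionally needs a probabilistic right-inverse that lifts categorical data into the matching argmax region, which is omitted here; fake_flow is a hypothetical stand-in for a trained flow's sampler.

    import numpy as np

    def argmax_flow_sample(flow_sample, n, num_classes):
        # Sample continuous vectors, then map to categories via argmax.
        v = flow_sample(n, num_classes)   # (n, num_classes) continuous draws
        return v.argmax(axis=-1)          # deterministic argmax -> categories

    rng = np.random.default_rng(0)
    fake_flow = lambda n, k: rng.normal(size=(n, k))  # stand-in for a trained flow
    print(argmax_flow_sample(fake_flow, 5, 4))        # five samples in {0, ..., 3}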

Variational Determinant Estimation with Spherical Normalizing Flows

no code implementations24 Dec 2020 Simon Passenheim, Emiel Hoogeboom

This paper introduces the Variational Determinant Estimator (VDE), a variational extension of the recently proposed determinant estimator of arXiv:2005.06553v2.

Density Estimation · Variational Inference

Self Normalizing Flows

1 code implementation14 Nov 2020 T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling

Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework.
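
A numerical illustration of the identity this framework exploits, not the paper's training procedure: for a linear flow z = W x, the gradient of log|det W| with respect to W is W^{-T}, so a learned approximate inverse R (kept close to W^{-1} by a reconstruction loss in the paper; faked below by perturbing the exact inverse) yields the cheap substitute R^T.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 4
    W = rng.normal(size=(d, d))
    R = np.linalg.inv(W) + 0.01 * rng.normal(size=(d, d))  # approximate inverse

    exact = np.linalg.inv(W).T    # true gradient of log|det W| w.r.t. W
    approx = R.T                  # self-normalizing substitute, cheap to use
    print(np.abs(exact - approx).max())  # small whenever R tracks W^{-1}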

SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows

2 code implementations NeurIPS 2020 Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling

Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.

The Convolution Exponential and Generalized Sylvester Flows

1 code implementation NeurIPS 2020 Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling

Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows.
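
A sketch of the underlying construction, assuming only the standard operator power series: exp(M) x = sum_k M^k x / k!, where M is any linear map (a convolution in the paper) and exp(-M) is the exact inverse. The matrix multiply below is a hypothetical stand-in for the convolution.

    import numpy as np

    def conv_exp(apply_conv, x, terms=12):
        # exp(M) x = sum_{k>=0} M^k x / k!, accumulated term by term.
        out, term = x.copy(), x.copy()
        for k in range(1, terms):
            term = apply_conv(term) / k    # now holds M^k x / k!
            out = out + term
        return out

    rng = np.random.default_rng(0)
    M = 0.3 * rng.normal(size=(5, 5))        # stand-in linear operator
    x = rng.normal(size=5)
    y = conv_exp(lambda v: M @ v, x)
    x_rec = conv_exp(lambda v: -(M @ v), y)  # exp(-M) inverts exp(M)
    print(np.abs(x - x_rec).max())           # small for a well-scaled M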

Predictive Sampling with Forecasting Autoregressive Models

no code implementations ICML 2020 Auke Wiggers, Emiel Hoogeboom

Autoregressive models (ARMs) currently hold state-of-the-art performance in likelihood-based modeling of image and audio data.

Learning Discrete Distributions by Dequantization

no code implementations30 Jan 2020 Emiel Hoogeboom, Taco S. Cohen, Jakub M. Tomczak

Media is generally stored digitally and is therefore discrete.
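
A sketch of the uniform-dequantization bound this line of work starts from (the paper studies dequantization schemes more generally): adding noise u ~ U[0,1)^d to discrete x gives log P(x) >= E[log p(x + u)], so a continuous density can assign likelihoods to discrete data. log_p below is a toy stand-in density.

    import numpy as np

    def dequantization_bound(log_p, x, num_samples=32, rng=None):
        # Monte Carlo estimate of E_{u~U[0,1)^d}[ log p(x + u) ].
        rng = rng or np.random.default_rng()
        vals = [log_p(x + rng.uniform(size=x.shape)) for _ in range(num_samples)]
        return np.mean(vals)

    log_p = lambda y: -0.5 * (y ** 2 + np.log(2 * np.pi)).sum()  # toy density
    print(dequantization_bound(log_p, np.array([0.0, 1.0, 2.0])))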

Learning Likelihoods with Conditional Normalizing Flows

2 code implementations29 Nov 2019 Christina Winkler, Daniel Worrall, Emiel Hoogeboom, Max Welling

Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula.

Structured Prediction · Super-Resolution
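
A minimal sketch of the conditional change of variables the abstract describes, with a toy conditional affine flow standing in for the invertible network: log p(y|x) = log p_Z(f(y; x)) + log|det df/dy|. The conditioners mu and sigma are hypothetical.

    import numpy as np

    def conditional_flow_logprob(y, x, f, log_det_jac, log_pz):
        # Change of variables with conditioning: z = f(y; x), invertible in y.
        return log_pz(f(y, x)) + log_det_jac(y, x)

    mu = lambda x: x.sum()                       # hypothetical conditioner
    sigma = lambda x: 1.0 + x.std()
    f = lambda y, x: (y - mu(x)) / sigma(x)      # toy conditional affine flow
    log_det = lambda y, x: -y.size * np.log(sigma(x))
    log_pz = lambda z: -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()

    y, x = np.array([0.5, -1.0]), np.array([1.0, 2.0])
    print(conditional_flow_logprob(y, x, f, log_det, log_pz))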

Integer Discrete Flows and Lossless Compression

1 code implementation NeurIPS 2019 Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, Max Welling

We introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data.
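
A minimal sketch of the bijective integer map idea, assuming an additive coupling with a rounded translation; t is a hypothetical stand-in for a learned translation network. Rounding keeps every value integer, so the map is exactly invertible with no Jacobian term and no dequantization.

    import numpy as np

    def idf_coupling(x, t):
        # z1 = x1, z2 = x2 + round(t(x1)): integer in, integer out.
        x1, x2 = x[: len(x) // 2], x[len(x) // 2 :]
        return np.concatenate([x1, x2 + np.round(t(x1)).astype(int)])

    def idf_coupling_inverse(z, t):
        z1, z2 = z[: len(z) // 2], z[len(z) // 2 :]
        return np.concatenate([z1, z2 - np.round(t(z1)).astype(int)])

    x = np.array([4, -2, 7, 1])
    t = lambda c: 0.5 * c                     # hypothetical translation network
    assert np.array_equal(idf_coupling_inverse(idf_coupling(x, t), t), x)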

Emerging Convolutions for Generative Normalizing Flows

1 code implementation30 Jan 2019 Emiel Hoogeboom, Rianne van den Berg, Max Welling

We generalize the 1×1 convolutions proposed in Glow to invertible d×d convolutions, which are more flexible since they operate on both channel and spatial axes.

Image Generation
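
For context, a sketch of the 1×1 baseline being generalized, assuming the standard Glow bookkeeping: a single c×c matrix W acts at every spatial position, so the log-determinant is height·width·log|det W|. To extend this to d×d kernels, the paper composes autoregressive convolutions with triangular Jacobians; that machinery is not reproduced here.

    import numpy as np

    def conv1x1_logdet(W, height, width):
        # The same c x c matrix acts at each of height*width positions.
        _, logabsdet = np.linalg.slogdet(W)
        return height * width * logabsdet

    c = 3
    W = np.linalg.qr(np.random.default_rng(0).normal(size=(c, c)))[0]
    print(conv1x1_logdet(W, height=32, width=32))   # ~0 for orthogonal W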

HexaConv

1 code implementation ICLR 2018 Emiel Hoogeboom, Jorn W. T. Peters, Taco S. Cohen, Max Welling

We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.

Aerial Scene Classification · Scene Classification
