2 code implementations • ICLR 2022 • Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions.
Ranked #8 on Image Generation on CIFAR-10 (bits/dimension metric)
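The order-agnostic generation idea can be illustrated with a toy sampling loop; `predict` below is a hypothetical stand-in for the learned conditional predictor, not the paper's architecture.

```python
import random

def order_agnostic_step(x, mask, predict):
    """One generation step: reveal a uniformly random unfilled position.

    `predict` stands in for a learned conditional p(x_i | revealed x);
    order-agnostic models are trained so that any reveal order is valid.
    """
    unfilled = [i for i, m in enumerate(mask) if not m]
    i = random.choice(unfilled)
    x[i] = predict(x, mask, i)
    mask[i] = True
    return x, mask
```

Repeating this step until every position is revealed yields one sample, with the generation order itself drawn at random.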
3 code implementations • 31 Mar 2022 • Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, Max Welling
This work introduces a diffusion model for molecule generation in 3D that is equivariant to Euclidean transformations.
5 code implementations • 19 Feb 2021 • Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations called E(n)-Equivariant Graph Neural Networks (EGNNs).
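The equivariance property can be sanity-checked on a toy coordinate update of the kind EGNN-style layers use: each point is moved along relative difference vectors, weighted by a function of pairwise distance. Here `phi` is an arbitrary illustrative weight function, not the learned network.

```python
def equivariant_coord_update(coords, phi):
    # Move each point along (x_i - x_j), weighted by phi of the squared
    # pairwise distance; depending only on relative geometry makes the
    # update equivariant to rotations, translations, and reflections.
    out = []
    for i, xi in enumerate(coords):
        delta = [0.0] * len(xi)
        for j, xj in enumerate(coords):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
            w = phi(d2)
            for k in range(len(xi)):
                delta[k] += w * (xi[k] - xj[k])
        out.append([a + b for a, b in zip(xi, delta)])
    return out
```

Rotating the input points and then applying the update gives the same result as updating first and rotating afterwards, which is exactly the E(n) equivariance the abstract refers to.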
3 code implementations • NeurIPS 2020 • Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling
Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.
2 code implementations • NeurIPS 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow), and an argmax function.
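A minimal sketch of the argmax link between a continuous sample and a categorical one; the continuous sampler below is a toy stand-in for the normalizing flow in the paper.

```python
def sample_via_argmax(continuous_sampler, num_classes):
    # Draw a continuous vector and take its argmax as the discrete sample.
    # Inverting this deterministic map is what requires the probabilistic
    # right-inverse machinery that Argmax Flows introduce.
    z = continuous_sampler(num_classes)
    return max(range(num_classes), key=lambda k: z[k])
```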
1 code implementation • 26 Jan 2023 • Emiel Hoogeboom, Jonathan Heek, Tim Salimans
Currently, applying diffusion models in the pixel space of high-resolution images is difficult.
Ranked #4 on Conditional Image Generation on ImageNet 128x128
1 code implementation • NeurIPS 2019 • Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, Max Welling
For that reason, we introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data.
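The bijective-integer-map idea can be sketched as an additive integer coupling, a simplified version of the couplings IDF builds on; `t` here is a hypothetical per-element transformation.

```python
def integer_coupling_forward(x1, x2, t):
    # Shift x2 by a rounded function of x1; rounding keeps the output an
    # integer while the map remains exactly invertible.
    return x1, x2 + round(t(x1))

def integer_coupling_inverse(y1, y2, t):
    # Invert by subtracting the identical rounded shift.
    return y1, y2 - round(t(y1))
```

Because the rounded shift is recomputed from the unchanged half, no information is lost and the inverse is exact, which is what enables lossless compression with IDFs.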
1 code implementation • 14 Nov 2020 • T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling
Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework.
1 code implementation • ICLR 2018 • Emiel Hoogeboom, Jorn W. T. Peters, Taco S. Cohen, Max Welling
We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.
1 code implementation • NeurIPS 2021 • Victor Garcia Satorras, Emiel Hoogeboom, Fabian B. Fuchs, Ingmar Posner, Max Welling
This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs).
1 code implementation • 30 Jan 2019 • Emiel Hoogeboom, Rianne van den Berg, Max Welling
We generalize the 1 x 1 convolutions proposed in Glow to invertible d x d convolutions, which are more flexible since they operate on both channel and spatial axes.
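For intuition, a 1 x 1 convolution is simply an invertible channel-mixing matrix applied at every spatial position; the 2x2 helper below is an illustrative toy, not the paper's d x d construction.

```python
def conv1x1(pixels, W):
    # Apply channel-mixing matrix W independently at each spatial position.
    return [[sum(W[i][j] * px[j] for j in range(len(px)))
             for i in range(len(W))] for px in pixels]

def inv2x2(W):
    # Closed-form inverse of a 2x2 matrix; a nonzero determinant is what
    # makes the flow layer bijective (and tractable to invert).
    (a, b), (c, d) = W
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]
```

Applying `conv1x1` with `inv2x2(W)` undoes the mixing exactly, which is the invertibility the generalized d x d convolutions must preserve while also mixing over spatial axes.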
1 code implementation • NeurIPS 2020 • Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling
Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows.
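The underlying trick is that the exponential of any linear map is invertible, with inverse exp(-M). A truncated power-series sketch for small dense matrices (the paper applies the same idea to convolution operators, never materializing the matrix):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=20):
    # Truncated series exp(M) = I + M + M^2/2! + ...; the result is always
    # invertible, with exp(M)^{-1} = exp(-M).
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, M)]
        result = [[a + b for a, b in zip(r, t)] for r, t in zip(result, term)]
    return result
```

Multiplying `mat_exp(M)` by `mat_exp(-M)` recovers the identity (up to truncation error), so the layer is invertible by construction regardless of M.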
1 code implementation • 29 Nov 2019 • Christina Winkler, Daniel Worrall, Emiel Hoogeboom, Max Welling
Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula.
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Taco S. Cohen, Jakub M. Tomczak
Media is generally stored digitally and is therefore discrete.
no code implementations • ICML 2020 • Auke Wiggers, Emiel Hoogeboom
Autoregressive models (ARMs) currently hold state-of-the-art performance in likelihood-based modeling of image and audio data.
no code implementations • AABI Symposium 2021 • Simon Passenheim, Emiel Hoogeboom
This paper introduces the Variational Determinant Estimator (VDE), a variational extension of the determinant estimator recently proposed in arXiv:2005.06553v2.
no code implementations • ICML Workshop INNF 2021 • Alexandra Lindt, Emiel Hoogeboom
Discrete flow-based models are a recently proposed class of generative models that learn invertible transformations for discrete random variables.
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
This paper introduces a new method to define and train continuous distributions such as normalizing flows directly on categorical data, for example text and image segmentation.
no code implementations • 12 Sep 2022 • Emiel Hoogeboom, Tim Salimans
Recently, Rissanen et al. (2022) presented a new type of diffusion process for generative modeling based on heat dissipation, or blurring, as an alternative to isotropic Gaussian diffusion.
no code implementations • 26 May 2023 • Emiel Hoogeboom, Eirikur Agustsson, Fabian Mentzer, Luca Versari, George Toderici, Lucas Theis
Despite the tremendous success of diffusion generative models in text-to-image generation, replicating this success in the domain of image compression has proven difficult.
no code implementations • 13 Jun 2023 • Allan Jabri, Sjoerd van Steenkiste, Emiel Hoogeboom, Mehdi S. M. Sajjadi, Thomas Kipf
In this paper, we leverage recent progress in diffusion models to equip 3D scene representation learning models with the ability to render high-fidelity novel views, while retaining benefits such as object-level scene editing to a large degree.
no code implementations • 12 Feb 2024 • David Ruhe, Jonathan Heek, Tim Salimans, Emiel Hoogeboom
Diffusion models have recently been increasingly applied to temporal data such as video, fluid mechanics simulations, or climate data.
no code implementations • 11 Mar 2024 • Jonathan Heek, Emiel Hoogeboom, Tim Salimans
By increasing the sample budget from a single step to 2-8 steps, we can more easily train models that generate higher-quality samples, while retaining much of the sampling-speed benefit.