no code implementations • 1 Feb 2023 • Marloes Arts, Victor Garcia Satorras, Chin-wei Huang, Daniel Zuegner, Marco Federici, Cecilia Clementi, Frank Noé, Robert Pinsler, Rianne van den Berg
Coarse-grained (CG) molecular dynamics enables the study of biological processes at temporal and spatial scales that would be intractable at an atomistic resolution.
1 code implementation • 30 Sep 2022 • Kevin E. Wu, Kevin K. Yang, Rianne van den Berg, James Y. Zou, Alex X. Lu, Ava P. Amini
The ability to computationally generate novel yet physically foldable protein structures could lead to new biological discoveries and new treatments targeting currently incurable diseases.
1 code implementation • 8 Sep 2022 • Johannes Brandstetter, Rianne van den Berg, Max Welling, Jayesh K. Gupta
We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates with their Clifford counterparts on 2D Navier-Stokes and weather modeling tasks, as well as 3D Maxwell equations.
2 code implementations • ICLR 2022 • Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions.
Ranked #8 on Image Generation on CIFAR-10 (bits/dimension metric)
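The order-agnostic training step that ARDMs build on can be sketched in a few lines of NumPy. This is a toy illustration only: the predictive model is omitted, and the `-1` mask token and function name are stand-ins, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def oa_ardm_training_step(x):
    """One order-agnostic training step (sketch): sample a random
    generation order and a timestep, then mask every position that has
    not been generated yet. A model (omitted here) would predict all
    masked tokens in parallel from the observed context."""
    d = len(x)
    sigma = rng.permutation(d)            # random generation order
    t = rng.integers(0, d)                # how many tokens are already known
    observed = np.zeros(d, dtype=bool)
    observed[sigma[:t]] = True            # first t positions in the order
    context = np.where(observed, x, -1)   # -1 plays the absorbing mask token
    targets = ~observed                   # positions the model must predict
    return context, targets

x = np.array([3, 1, 4, 1, 5])
context, targets = oa_ardm_training_step(x)
```

Because all masked positions are predicted in parallel, a single step trains the model for every generation order passing through that timestep.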
no code implementations • 29 Sep 2021 • Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi
It has been shown that under two assumptions, (a) access to samples from intermediate distributions and (b) samples being annotated with the amount of change from the source distribution, self-training can be successfully applied to gradually shifted samples to adapt the model toward the target distribution.
no code implementations • ICML Workshop INNF 2021 • Daniel D. Johnson, Jacob Austin, Rianne van den Berg, Daniel Tarlow
Denoising diffusion probabilistic models (DDPMs) have shown impressive results on sequence generation by iteratively corrupting each example and then learning to map corrupted versions back to the original.
3 code implementations • NeurIPS 2021 • Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, Rianne van den Berg
Here, we introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusion model of Hoogeboom et al. (2021) by going beyond corruption processes with uniform transition probabilities.
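The uniform-transition corruption process that D3PMs generalize can be sketched directly in NumPy. The vocabulary size, `beta`, and step count below are arbitrary toy values:

```python
import numpy as np

def uniform_transition_matrix(K, beta):
    """With probability beta a token resamples uniformly over K
    categories, otherwise it stays put (row-stochastic by construction)."""
    return (1.0 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

K = 4                                   # toy vocabulary size
Q = uniform_transition_matrix(K, beta=0.1)
Q_cum = np.linalg.matrix_power(Q, 50)   # cumulative corruption over 50 steps
x0 = np.eye(K)[2]                       # one-hot for category 2
marginal = x0 @ Q_cum                   # q(x_50 | x_0): close to uniform
```

After enough steps the marginal over categories approaches the uniform distribution, which is exactly the stationary distribution of this corruption process; D3PMs swap in structured transition matrices instead.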
1 code implementation • 10 Jun 2021 • Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi
It has been shown that under the following two assumptions: (a) access to samples from intermediate distributions, and (b) samples being annotated with the amount of change from the source distribution, self-training can be successfully applied to gradually shifted samples to adapt the model toward the target distribution.
2 code implementations • NeurIPS 2020 • Alexey A. Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner
Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.
no code implementations • ICLR 2021 • Rianne van den Berg, Alexey A. Gritsenko, Mostafa Dehghani, Casper Kaae Sønderby, Tim Salimans
Furthermore, we zoom in on the effect of gradient bias due to the straight-through estimator in integer discrete flows, and demonstrate that its influence is highly dependent on architecture choices and less prominent than previously thought.
1 code implementation • 18 Jun 2019 • Karen Ullrich, Rianne van den Berg, Marcus Brubaker, David Fleet, Max Welling
Finally, we demonstrate how the reconstruction algorithm can be extended with an amortized inference scheme on unknown attributes such as object pose.
1 code implementation • NeurIPS 2019 • Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, Max Welling
For that reason, we introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data.
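The core building block, an additive coupling layer with rounded translations, can be sketched as follows; the toy translation function `t` stands in for the learned network:

```python
import numpy as np

def idf_coupling_forward(x, t):
    """Additive coupling for integers (sketch): shift the second half of
    x by a rounded translation of the first half. Rounding keeps the map
    integer-valued yet exactly invertible."""
    xa, xb = np.split(x, 2)
    return np.concatenate([xa, xb + np.round(t(xa)).astype(int)])

def idf_coupling_inverse(z, t):
    za, zb = np.split(z, 2)
    return np.concatenate([za, zb - np.round(t(za)).astype(int)])

t = lambda xa: 0.7 * xa          # toy stand-in for the learned translation net

x = np.array([1, -2, 3, 4])
z = idf_coupling_forward(x, t)
x_rec = idf_coupling_inverse(z, t)
```

The inverse subtracts the same rounded translation, so the round-trip is exact on integers with no reconstruction error.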
1 code implementation • 30 Jan 2019 • Emiel Hoogeboom, Rianne van den Berg, Max Welling
We generalize the 1×1 convolutions proposed in Glow to invertible d×d convolutions, which are more flexible since they operate on both channel and spatial axes.
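The 1×1 special case from Glow, which this work generalizes, is easy to sketch: the same c×c matrix mixes the channels at every spatial position, and its Jacobian log-determinant is cheap. A minimal NumPy illustration with arbitrary toy shapes:

```python
import numpy as np

def inv_1x1_conv(x, W):
    """Invertible 1x1 convolution: the same c-by-c matrix W mixes the
    channels at every spatial position; the Jacobian log-determinant
    is h * w * log|det W|."""
    h, w, c = x.shape
    z = x @ W.T                          # apply W along the channel axis
    logdet = h * w * np.log(np.abs(np.linalg.det(W)))
    return z, logdet

rng = np.random.default_rng(0)
c = 3
W = rng.normal(size=(c, c))              # invertible with probability 1
x = rng.normal(size=(4, 4, c))
z, logdet = inv_1x1_conv(x, W)
x_rec = z @ np.linalg.inv(W).T           # exact inverse
```

The d×d generalization additionally mixes spatial neighbours, which is what makes its inverse and determinant nontrivial.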
no code implementations • 12 Oct 2018 • Bastiaan S. Veeling, Rianne van den Berg, Max Welling
High-risk domains require reliable confidence estimates from predictive models.
2 code implementations • ICLR 2019 • Giorgio Patrini, Rianne van den Berg, Patrick Forré, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, Frank Nielsen
We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error.
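Entropy-regularized Wasserstein distances of this kind are commonly approximated with Sinkhorn iterations, which the paper's title refers to. A toy NumPy sketch (the cost matrix, `eps`, and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=200):
    """Sinkhorn iterations (sketch): approximate the entropy-regularized
    optimal transport plan between histograms a and b for cost matrix C
    by alternately rescaling rows and columns of exp(-C / eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

C = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # toy cost matrix
a = b = np.array([0.5, 0.5])            # uniform marginals
P = sinkhorn(C, a, b)
```

The resulting plan has (approximately) the prescribed marginals and concentrates mass on the cheap matches.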
2 code implementations • 15 Mar 2018 • Rianne van den Berg, Leonard Hasenclever, Jakub M. Tomczak, Max Welling
Variational inference relies on flexible approximate posterior distributions.
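The Sylvester flows this paper introduces transform z' = z + A h(Bz + b); a toy NumPy sketch (all sizes and parameters below are arbitrary), using Sylvester's determinant identity to evaluate the Jacobian log-determinant on a small M×M matrix instead of the full d×d one:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 5, 2                              # data dim and flow rank (toy sizes)
A = 0.1 * rng.normal(size=(d, M))
B = 0.1 * rng.normal(size=(M, d))
b = rng.normal(size=M)
z = rng.normal(size=d)

def sylvester_flow(z):
    """One Sylvester flow step: z' = z + A h(Bz + b), with h = tanh."""
    return z + A @ np.tanh(B @ z + b)

def sylvester_logdet(z):
    """Sylvester's determinant identity reduces the d x d Jacobian
    determinant to an M x M one: det(I_d + A D B) = det(I_M + D B A),
    where D = diag(h'(Bz + b))."""
    D = np.diag(1.0 - np.tanh(B @ z + b) ** 2)
    return np.log(np.abs(np.linalg.det(np.eye(M) + D @ B @ A)))

z_new = sylvester_flow(z)
logdet = sylvester_logdet(z)
# Reference: log-determinant of the full d x d Jacobian I_d + A D B.
D = np.diag(1.0 - np.tanh(B @ z + b) ** 2)
full_logdet = np.log(np.abs(np.linalg.det(np.eye(d) + A @ D @ B)))
```

The identity makes the density change of the flow cheap to track even when d is large, provided the rank M stays small.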
15 code implementations • 7 Jun 2017 • Rianne van den Berg, Thomas N. Kipf, Max Welling
We consider matrix completion for recommender systems from the point of view of link prediction on graphs.
Ranked #4 on Recommendation Systems on YahooMusic Monti (using extra training data)
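The link-prediction view can be sketched directly: each observed rating becomes a typed edge in a bipartite user-item graph, on which a graph model then predicts the missing edges. A toy NumPy illustration (the ratings matrix and helper name are made up for the example):

```python
import numpy as np

def ratings_to_bipartite(R):
    """Link-prediction view of matrix completion (sketch): every observed
    rating r becomes an edge of type r between a user node and an item
    node in a bipartite graph."""
    edges = {int(r): [] for r in np.unique(R[R > 0])}
    for u, i in zip(*np.nonzero(R)):
        edges[int(R[u, i])].append((int(u), int(i)))
    return edges

# Toy ratings matrix: rows = users, cols = items, 0 = unobserved.
R = np.array([[5, 0, 3],
              [0, 4, 0]])
edges = ratings_to_bipartite(R)
```

Grouping edges by rating level is what lets a relational graph model treat each rating value as its own edge type.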
27 code implementations • 17 Mar 2017 • Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, Max Welling
We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification.
Ranked #1 on Node Classification on AIFB
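A single R-GCN layer can be sketched in NumPy with toy adjacency and weight matrices: neighbours are aggregated separately per relation type, each relation with its own weight matrix, plus a self-loop (the normalization and sizes below are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def rgcn_layer(H, A_per_rel, W_per_rel, W_self):
    """One R-GCN layer (sketch): relation-specific message passing with
    degree normalization and a self-loop, followed by a ReLU."""
    out = H @ W_self
    for A, W in zip(A_per_rel, W_per_rel):
        deg = A.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0              # guard isolated nodes
        out += (A / deg) @ H @ W         # degree-normalized messages
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
n, d_in, d_out = 3, 4, 2                 # toy graph and feature sizes
H = rng.normal(size=(n, d_in))
A_per_rel = [np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float),
             np.array([[0, 0, 1], [1, 0, 0], [0, 0, 0]], dtype=float)]
W_per_rel = [rng.normal(size=(d_in, d_out)) for _ in A_per_rel]
W_self = rng.normal(size=(d_in, d_out))
H_next = rgcn_layer(H, A_per_rel, W_per_rel, W_self)
```

Stacking such layers and reading off node representations is what supports the stand-alone entity classification the abstract describes.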