no code implementations • ICML 2020 • Abhishek Kumar, Ben Poole
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood.
no code implementations • 1 Jun 2023 • Dave Epstein, Allan Jabri, Ben Poole, Alexei A. Efros, Aleksander Holynski
However, many aspects of an image are difficult or impossible to convey through text.
no code implementations • 27 Apr 2023 • Guandao Yang, Abhijit Kundu, Leonidas J. Guibas, Jonathan T. Barron, Ben Poole
Neural Radiance Fields (NeRFs) have emerged as a powerful neural 3D representation for objects and scenes derived from 2D data.
no code implementations • 23 Mar 2023 • Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, Yuanzhen Li, Varun Jampani
We present DreamBooth3D, an approach to personalize text-to-3D generative models from as few as 3-6 casually captured images of a subject.
1 code implementation • 17 Nov 2022 • Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Sohl-Dickstein
While deep learning models have replaced hand-designed features across many domains, these models are still trained with hand-designed optimizers.
no code implementations • 5 Oct 2022 • Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, Tim Salimans
We present Imagen Video, a text-conditional video generation system based on a cascade of video diffusion models.
Ranked #1 on Video Generation on LAION-400M
3 code implementations • 29 Sep 2022 • Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall
Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.
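A hedged sketch of that procedure's gradient (the paper's Score Distillation Sampling), under stated assumptions: `eps_model` is a hypothetical frozen text-conditioned denoiser that predicts added noise, `alphas_cumprod` is its noise schedule, and the paper's per-timestep weighting w(t) is omitted. This is an illustration, not the authors' implementation.

```python
import torch

def sds_grad(render, eps_model, alphas_cumprod):
    """Gradient signal for one rendered view; apply it to the 3D model's
    parameters via render.backward(gradient=sds_grad(...))."""
    b = render.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,))       # random diffusion step
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(render)
    x_t = a.sqrt() * render + (1 - a).sqrt() * noise      # forward-diffuse the render
    with torch.no_grad():
        eps_pred = eps_model(x_t, t)                      # frozen diffusion model
    # SDS drops the denoiser Jacobian: the noise residual itself is the gradient.
    return eps_pred - noise
```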
4 code implementations • CVPR 2022 • Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole
Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision.
1 code implementation • NeurIPS 2021 • Diederik Kingma, Tim Salimans, Ben Poole, Jonathan Ho
In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.
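A hedged restatement of that invariance in math, using v = SNR(t) as the integration variable (notation approximated from the paper; constants and weighting conventions may differ):

```latex
\mathcal{L}_\infty(x) \;=\; \tfrac{1}{2}\,
\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}
\int_{\mathrm{SNR}_{\min}}^{\mathrm{SNR}_{\max}}
\left\lVert x - \hat{x}_\theta(z_v, v) \right\rVert_2^2 \, dv
```

After the change of variables, the schedule enters only through the endpoints SNR_min = SNR(1) and SNR_max = SNR(0); any two schedules sharing those endpoints yield the same continuous-time loss.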
1 code implementation • ICLR 2022 • Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models (Uria et al., 2014) and absorbing discrete diffusion (Austin et al., 2021), which we show are special cases of ARDMs under mild assumptions.
Ranked #6 on Image Generation on CIFAR-10 (bits/dimension metric)
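A minimal sketch of the order-agnostic training step this implies, assuming a hypothetical `model` that maps (batch, length) token ids to per-position logits and a reserved `mask_id` token; a sketch of the objective, not the paper's code.

```python
import torch
import torch.nn.functional as F

def oa_ardm_nll(model, x, mask_id):
    """One stochastic term of the order-agnostic bound:
    NLL <= E_t [ D * mean NLL over the positions masked at step t ]."""
    B, D = x.shape
    t = torch.randint(1, D + 1, (B, 1))                 # step in a random ordering
    ranks = torch.argsort(torch.rand(B, D), dim=-1)     # random permutation per row
    visible = ranks < (t - 1)                           # first t-1 tokens are context
    inp = torch.where(visible, x, torch.full_like(x, mask_id))
    nll = F.cross_entropy(model(inp).transpose(1, 2), x, reduction="none")
    masked = (~visible).float()
    # D / (D - t + 1) weighting = D times the mean NLL over masked positions.
    return (D * (nll * masked).sum(-1) / masked.sum(-1)).mean()
```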
4 code implementations • 1 Jul 2021 • Diederik P. Kingma, Tim Salimans, Ben Poole, Jonathan Ho
In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints.
Ranked #1 on Image Generation on CIFAR-10 (bits/dimension metric)
1 code implementation • 1 Jan 2021 • Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers.
no code implementations • 1 Jan 2021 • Luke Metz, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters.
2 code implementations • ICLR 2021 • Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, Diederik P. Kingma
Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset.
Ranked #14 on Image Generation on CelebA 64x64
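A hedged sketch of the recovery-likelihood idea: rather than sampling the EBM marginal directly, sample the conditional of a clean image given its noisy version, which is far closer to unimodal and therefore easier for Langevin dynamics. Here `f` is a hypothetical network returning unnormalized log-densities; an illustration, not the paper's implementation.

```python
import torch

def sample_recovery(f, x_noisy, sigma, steps=30, eps=1e-2):
    """Langevin sampling from p(x | x_noisy) ∝ exp(f(x) - ||x_noisy - x||^2 / (2 sigma^2))."""
    x = x_noisy.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        log_p = f(x).sum() - ((x_noisy - x) ** 2).sum() / (2 * sigma ** 2)
        (grad,) = torch.autograd.grad(log_p, x)
        x = (x + 0.5 * eps * grad + eps ** 0.5 * torch.randn_like(x)).detach()
    return x
```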
8 code implementations • ICLR 2021 • Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high-fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
Ranked #11 on Image Generation on CIFAR-10
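A minimal predictor-only sketch of sampling by integrating the reverse-time SDE (the paper additionally uses corrector steps and nonzero drift terms); `score` and the diffusion coefficient `g` are hypothetical stand-ins for a trained score network and a chosen schedule.

```python
import torch

def reverse_sde_sample(score, x_T, g, n_steps=1000, T=1.0):
    """Euler-Maruyama integration of dx = -g(t)^2 * score(x, t) dt + g(t) dw,
    run backward from t = T to t = 0 (drift f set to zero for simplicity)."""
    dt = T / n_steps
    x = x_T
    for i in range(n_steps):
        t = T - i * dt
        x = x + (g(t) ** 2) * score(x, t) * dt + g(t) * dt ** 0.5 * torch.randn_like(x)
    return x
```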
no code implementations • AABI Symposium 2021 • Alexander A. Alemi, Warren R. Morningstar, Ben Poole, Ian Fischer, Joshua V. Dillon
In discriminative settings such as regression and classification, there are two random variables at play: the inputs X and the targets Y.
no code implementations • 15 Oct 2020 • Matt Shannon, Ben Poole, Soroosh Mariooryad, Tom Bagby, Eric Battenberg, David Kao, Daisy Stanton, RJ Skerry-Ryan
Non-saturating generative adversarial network (GAN) training is widely used and has continued to obtain groundbreaking results.
no code implementations • 23 Sep 2020 • Luke Metz, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
In this work we focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters.
1 code implementation • NeurIPS 2020 • Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola
Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning.
Ranked #2 on Contrastive Learning on imagenet-1k
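A minimal two-view sketch of such a contrastive objective (InfoNCE-style; the paper's multiview objective generalizes beyond two views). `z1` and `z2` are hypothetical encoder embeddings of two views of the same batch; matched rows are the positives.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau            # (B, B) similarities; diagonal = positives
    labels = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, labels)
```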
no code implementations • 27 Feb 2020 • Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
We present TaskSet, a dataset of tasks for use in training and evaluating optimizers.
no code implementations • 20 Feb 2020 • Abhishek Kumar, Ben Poole, Kevin Murphy
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
3 code implementations • ICML 2020 • Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.
no code implementations • 31 Jan 2020 • Abhishek Kumar, Ben Poole
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood.
1 code implementation • ICLR 2020 • Rui Shu, Yining Chen, Abhishek Kumar, Stefano Ermon, Ben Poole
Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning.
no code implementations • 21 Oct 2019 • Zhe Dong, Deniz Oktay, Ben Poole, Alexander A. Alemi
Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future.
no code implementations • 25 Sep 2019 • Zhe Dong, Deniz Oktay, Ben Poole, Alexander A. Alemi
Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future.
2 code implementations • 6 Jun 2019 • Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, Ekin D. Cubuk
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions.
2 code implementations • NeurIPS 2019 • Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown.
Ranked #16 on Language Modelling on Text8
3 code implementations • 16 May 2019 • Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A. Alemi, George Tucker
Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging.
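One bound the paper analyzes is InfoNCE, restated here in hedged form: with K samples and a learned critic f,

```latex
I(X;Y) \;\ge\; \mathbb{E}\!\left[
\frac{1}{K} \sum_{i=1}^{K}
\log \frac{e^{f(x_i, y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i, y_j)}}
\right] \;=\; I_{\mathrm{NCE}} \;\le\; \log K
```

Because I_NCE is capped at log K, the estimator has low variance but large bias whenever the true MI exceeds log K, one of the bias-variance trade-offs the paper characterizes.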
no code implementations • ICLR 2019 • Ali Razavi, Aäron van den Oord, Ben Poole, Oriol Vinyals
Due to the phenomenon of "posterior collapse," current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires augmenting the objective so that it does not merely maximize the likelihood of the data.
Ranked #7 on Image Generation on ImageNet 32x32 (bpd metric)
no code implementations • ICLR 2018 • Alex Alemi, Ben Poole, Ian Fischer, Josh Dillon, Rif A. Saurous, Kevin Murphy
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference.
1 code implementation • ICML 2018 • Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy
Recent work in unsupervised representation learning has focused on learning deep directed latent-variable models.
5 code implementations • ICML 2017 • Friedemann Zenke, Ben Poole, Surya Ganguli
While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.
no code implementations • 8 Dec 2016 • Ben Poole, Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova
We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization.
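As a hedged illustration of the density-ratio view: for the standard cross-entropy objective, the Bayes-optimal discriminator satisfies

```latex
D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}
\quad\Longrightarrow\quad
\frac{p_{\mathrm{data}}(x)}{p_g(x)} \approx \frac{D(x)}{1 - D(x)}
```

so each discriminator update re-estimates the ratio p_data/p_g, and the generator update descends an approximate divergence built from that ratio estimate.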
no code implementations • 24 Nov 2016 • Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein
This quantity grows exponentially in the depth of the network, and is responsible for the depth sensitivity observed.
9 code implementations • 7 Nov 2016 • Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator.
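A hedged PyTorch sketch of the idea, assuming hypothetical modules `G` and `D` (with `D` returning logits of shape (batch, 1)); it differentiates the generator loss through k simulated discriminator updates, and is an illustration rather than the authors' original implementation.

```python
import torch
from torch.func import functional_call

bce = torch.nn.functional.binary_cross_entropy_with_logits

def unrolled_g_loss(G, D, z, real, k=5, d_lr=1e-2):
    # Copy discriminator parameters so the unrolled updates stay differentiable.
    params = {n: p.clone() for n, p in D.named_parameters()}
    fake = G(z)
    ones = torch.ones(real.shape[0], 1)
    zeros = torch.zeros(real.shape[0], 1)
    for _ in range(k):  # differentiate *through* k discriminator steps
        d_loss = (bce(functional_call(D, params, (real,)), ones)
                  + bce(functional_call(D, params, (fake.detach(),)), zeros))
        grads = torch.autograd.grad(d_loss, list(params.values()), create_graph=True)
        params = {n: p - d_lr * g for (n, p), g in zip(params.items(), grads)}
    # The generator is scored against the discriminator as it will be after k updates.
    return bce(functional_call(D, params, (fake,)), ones)
```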
19 code implementations • 3 Nov 2016 • Eric Jang, Shixiang Gu, Ben Poole
Categorical variables are a natural choice for representing discrete structure in the world.
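A minimal sketch of the resulting Gumbel-Softmax (Concrete) reparameterization the paper proposes; this is the standard formulation rather than the authors' released code.

```python
import torch

def gumbel_softmax(logits, tau=1.0, hard=False):
    """Differentiable, approximately one-hot sample from a categorical."""
    g = -torch.empty_like(logits).exponential_().log()   # Gumbel(0, 1) noise
    y = torch.softmax((logits + g) / tau, dim=-1)        # relaxed one-hot sample
    if hard:  # straight-through: discrete forward pass, soft gradients
        one_hot = torch.zeros_like(y).scatter_(-1, y.argmax(-1, keepdim=True), 1.0)
        y = (one_hot - y).detach() + y
    return y
```

PyTorch now ships this operation as torch.nn.functional.gumbel_softmax.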
1 code implementation • ICLR 2017 • Eric Jang, Shixiang Gu, Ben Poole
Categorical variables are a natural choice for representing discrete structure in the world.
1 code implementation • NeurIPS 2016 • Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli
We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights.
no code implementations • ICML 2017 • Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein
We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute.
9 code implementations • 2 Jun 2016 • Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.
2 code implementations • 10 Nov 2015 • Jonathan T. Barron, Ben Poole
We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms.
no code implementations • 6 Jun 2014 • Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli
Autoencoders have emerged as a useful framework for unsupervised learning of internal representations, and a wide variety of apparently conceptually disparate regularization techniques have been proposed to generate useful features.
1 code implementation • 9 Nov 2013 • Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli
This algorithm contrasts with earlier stochastic second-order techniques that treat the Hessian of each contributing function as a noisy approximation to the full Hessian, rather than as a target for direct estimation.