Search Results for author: Didrik Nielsen

Found 11 papers, 9 papers with code

Few-Shot Diffusion Models

1 code implementation • 30 May 2022 • Giorgio Giannone, Didrik Nielsen, Ole Winther

At test time, the model is able to generate samples from previously unseen classes conditioned on as few as 5 samples from that class.

Denoising • Few-Shot Learning

Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC

1 code implementation • 4 Feb 2021 • Priyank Jaini, Didrik Nielsen, Max Welling

Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
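To make the starting point concrete, here is a minimal NumPy sketch of Hybrid (Hamiltonian) Monte Carlo for a standard 2-D Gaussian target — an illustration of the general method only, not the paper's code; the function names, step size, and trajectory length are assumptions.

```python
import numpy as np

def log_prob(x):
    """Log-density of a standard 2-D Gaussian target (up to a constant)."""
    return -0.5 * np.sum(x * x)

def grad_log_prob(x):
    return -x

def hmc_step(x, rng, step_size=0.1, n_leapfrog=20):
    """One Hybrid Monte Carlo transition: leapfrog integration + Metropolis correction."""
    p0 = rng.standard_normal(x.shape)             # resample momentum
    x_new, p = x.copy(), p0.copy()
    p += 0.5 * step_size * grad_log_prob(x_new)   # half momentum step
    for _ in range(n_leapfrog):
        x_new = x_new + step_size * p             # full position step
        p += step_size * grad_log_prob(x_new)     # full momentum step
    p -= 0.5 * step_size * grad_log_prob(x_new)   # undo the extra half step
    # Accept or reject based on the change in the Hamiltonian
    h_old = -log_prob(x) + 0.5 * np.sum(p0 * p0)
    h_new = -log_prob(x_new) + 0.5 * np.sum(p * p)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2000):
    x = hmc_step(x, rng)
    samples.append(x)
samples = np.asarray(samples)
```

The paper's contribution is extending this continuous-space machinery to combinatorial spaces via SurVAE flow augmentation; the sketch above only covers the continuous baseline.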

Argmax Flows: Learning Categorical Distributions with Normalizing Flows

no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling

This paper introduces a new method to define and train continuous distributions, such as normalizing flows, directly on categorical data, for example text and image segmentation maps.

Image Segmentation • Semantic Segmentation
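The core idea suggested by the title is to lift each category x to a continuous point z with argmax(z) = x, model z with a flow, and recover x deterministically via the argmax. A hypothetical NumPy sketch of that lift/projection pair (the paper itself uses learned probabilistic lifting; the thresholding scheme here is a simplification of my own):

```python
import numpy as np

def argmax_lift(x, num_classes, rng):
    """Lift category x to a continuous vector z with argmax(z) == x.
    Hypothetical scheme: sample noise, then force coordinate x on top."""
    z = rng.standard_normal(num_classes)
    z[x] = np.max(z) + np.abs(rng.standard_normal()) + 1e-3
    return z

def argmax_project(z):
    """Deterministic right-inverse: map continuous z back to its category."""
    return int(np.argmax(z))
```

The round trip argmax_project(argmax_lift(x, K, rng)) == x holds by construction, which is what lets a continuous flow over z induce a valid categorical likelihood over x.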

SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows

3 code implementations • NeurIPS 2020 • Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling

Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.
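The flow half of that bridge rests on the change-of-variables formula, which gives an exact log-likelihood. A minimal one-layer affine-flow illustration (my own sketch, not the paper's SurVAE layers):

```python
import numpy as np

def affine_flow_log_prob(x, scale, shift):
    """Exact log-density of a one-layer affine flow with a standard normal base:
    z = (x - shift) / scale, and log p(x) = log N(z; 0, 1) + log |dz/dx|."""
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))   # standard normal log-pdf
    log_det = -np.log(np.abs(scale))                    # log-Jacobian of the map
    return log_base + log_det
```

This recovers exactly the log-pdf of N(shift, scale²); SurVAE flows generalize the framework to surjective maps, where the exact log-determinant term is replaced by a likelihood-contribution bound, connecting flows to VAEs.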

Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow

1 code implementation • NeurIPS 2020 • Didrik Nielsen, Ole Winther

Flow models have recently made great progress at modeling ordinal discrete data such as images and audio.
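Fitting a continuous flow to discrete pixels typically relies on dequantization: add noise u ∈ [0, 1) so the data become continuous, which lower-bounds the discrete log-likelihood by Jensen's inequality — the "gap" the title refers to. An illustrative sketch (function names are my own):

```python
import numpy as np

def dequantize(x_discrete, rng):
    """Uniform dequantization: x + u with u ~ U[0, 1), so discrete values
    become continuous while floor() still recovers the original pixel."""
    return x_discrete.astype(np.float64) + rng.random(x_discrete.shape)

def discrete_log_prob_bound(log_density, x_discrete, rng, n=64):
    """Monte Carlo estimate of the dequantization lower bound
    E_u[log p(x + u)] <= log P(x)."""
    samples = [log_density(dequantize(x_discrete, rng)) for _ in range(n)]
    return np.mean(samples)
```

For a density that is flat within each unit cell the bound is tight; the paper's point is that viewing PixelCNN as a single-layer flow lets one reason about (and close) this gap.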

SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient

2 code implementations • NeurIPS 2018 • Aaron Mishkin, Frederik Kunstner, Didrik Nielsen, Mark Schmidt, Mohammad Emtiyaz Khan

Uncertainty estimation in large deep-learning models is a computationally challenging task, where it is difficult to form even a Gaussian approximation to the posterior distribution.

Variational Inference
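The structural trick named in the title is a low-rank-plus-diagonal Gaussian covariance, Σ = U Uᵀ + diag(d), which avoids ever forming the full D×D matrix. A hypothetical sketch of sampling from such a Gaussian (illustrative of the covariance structure only, not SLANG's natural-gradient updates):

```python
import numpy as np

def sample_low_rank_plus_diag(mean, U, d, rng):
    """Draw from N(mean, U @ U.T + diag(d)) in O(D k) per sample rather than
    O(D^2): combine independent low-rank and diagonal noise components."""
    eps_lr = rng.standard_normal(U.shape[1])      # k-dimensional noise
    eps_d = rng.standard_normal(mean.shape[0])    # D-dimensional noise
    return mean + U @ eps_lr + np.sqrt(d) * eps_d
```

Because the two noise terms are independent, the sample covariance is U Uᵀ + diag(d) exactly, which is what makes this parameterization attractive for large deep-learning models where D is huge but a small rank k captures the dominant posterior correlations.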

Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models

1 code implementation • 12 Jul 2018 • Mohammad Emtiyaz Khan, Didrik Nielsen

Bayesian inference plays an important role in advancing machine learning, but faces computational challenges when applied to complex models such as deep neural networks.

Bayesian Inference • Variational Inference

Variational Adaptive-Newton Method for Explorative Learning

no code implementations • 15 Nov 2017 • Mohammad Emtiyaz Khan, Wu Lin, Voot Tangkaratt, Zuozhu Liu, Didrik Nielsen

We present the Variational Adaptive Newton (VAN) method which is a black-box optimization method especially suitable for explorative-learning tasks such as active learning and reinforcement learning.

Active Learning • Reinforcement Learning +2
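The explorative flavor of such methods can be illustrated by maintaining a Gaussian search distribution over the variable being optimized and updating its mean from samples. The sketch below uses a plain score-function (REINFORCE) gradient of the expected loss — a deliberate simplification; VAN itself uses natural-gradient updates that also adapt the covariance, which this sketch omits:

```python
import numpy as np

def explore_step(mean, sigma, loss, rng, n=500, lr=0.1):
    """One update of the search distribution N(mean, sigma^2): estimate
    d/d(mean) E[loss(x)] with a baseline-subtracted score-function estimator,
    then take a gradient step on the mean."""
    eps = rng.standard_normal(n)
    fx = loss(mean + sigma * eps)
    grad = np.mean((fx - fx.mean()) * eps) / sigma   # baseline reduces variance
    return mean - lr * grad

# Minimize a simple quadratic by exploring with the Gaussian
rng = np.random.default_rng(4)
mean = 0.0
for _ in range(300):
    mean = explore_step(mean, 1.0, lambda x: (x - 3.0) ** 2, rng)
```

The mean drifts toward the minimizer at x = 3 while the fixed exploration noise keeps the search from collapsing too early — the explorative behavior the abstract highlights for active and reinforcement learning.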
