Search Results for author: Joost van Amersfoort

Found 16 papers, 7 papers with code

Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network

no code implementations ICML 2020 Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal

We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
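The abstract above does not spell out the mechanism, so here is a minimal, hedged sketch of the idea behind DUQ as described in the paper: the model maps inputs to features and measures an RBF kernel distance to learnable class centroids, so an input far from every centroid can be rejected as out-of-distribution in one forward pass. The DUQHead module and the fixed length scale below are illustrative simplifications; the paper additionally uses per-class weight matrices, centroid updates via an exponential moving average, and a gradient penalty.

```python
import torch
import torch.nn as nn

class DUQHead(nn.Module):
    """Minimal sketch of an RBF-style output head in the spirit of DUQ.

    Hypothetical simplification: one learnable centroid per class; the
    paper additionally uses per-class weight matrices, an exponential
    moving average of centroids, and a gradient penalty."""

    def __init__(self, feature_dim: int, num_classes: int, length_scale: float = 0.1):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_classes, feature_dim))
        self.length_scale = length_scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Squared distance from each feature vector to each class centroid.
        diff = features.unsqueeze(1) - self.centroids.unsqueeze(0)
        dist_sq = (diff ** 2).sum(-1)
        # RBF kernel value per class; the highest value is the prediction,
        # and a low maximum indicates the input is far from all classes.
        return torch.exp(-dist_sq / (2 * self.length_scale ** 2))

features = torch.randn(8, 32)          # e.g. output of a distance-preserving feature extractor
scores = DUQHead(feature_dim=32, num_classes=10)(features)
uncertainty = 1.0 - scores.max(dim=1).values  # far from every centroid => likely OOD
```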

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients

1 code implementation ICLR 2022 Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.

Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data

1 code implementation NeurIPS 2021 Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, Yarin Gal

We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects.

Active Learning

Deep Deterministic Uncertainty for Semantic Segmentation

no code implementations 29 Oct 2021 Jishnu Mukhoti, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

We extend Deep Deterministic Uncertainty (DDU), a method for uncertainty estimation using feature space densities, to semantic segmentation.

Semantic Segmentation

Can convolutional ResNets approximately preserve input distances? A frequency analysis perspective

no code implementations 4 Jun 2021 Lewis Smith, Joost van Amersfoort, Haiwen Huang, Stephen Roberts, Yarin Gal

ResNets constrained to be bi-Lipschitz, that is, approximately distance preserving, have been a crucial component of recently proposed techniques for deterministic uncertainty quantification in neural models.

Deep Deterministic Uncertainty: A Simple Baseline

4 code implementations 23 Feb 2021 Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

Reliable uncertainty from deterministic single forward-pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.

Active Learning
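As a rough illustration of the "feature space densities" mentioned in the two DDU entries above, the sketch below fits one Gaussian per class to pre-extracted features and reads low feature density as high epistemic uncertainty. This is a hedged approximation: the paper also relies on a suitably regularised (e.g. spectrally normalised) feature extractor and treats softmax entropy separately as aleatoric uncertainty.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features: np.ndarray, labels: np.ndarray):
    """Fit one Gaussian per class to the feature space (GDA-style density),
    roughly in the spirit of DDU; the paper's setup is richer than this sketch."""
    classes = np.unique(labels)
    priors = np.array([(labels == c).mean() for c in classes])
    gaussians = [
        multivariate_normal(
            mean=features[labels == c].mean(axis=0),
            cov=np.cov(features[labels == c], rowvar=False) + 1e-3 * np.eye(features.shape[1]),
        )
        for c in classes
    ]
    return priors, gaussians

def feature_density(x: np.ndarray, priors, gaussians) -> np.ndarray:
    """Marginal feature density; low density is read as high epistemic uncertainty."""
    return sum(p * g.pdf(x) for p, g in zip(priors, gaussians))

# Hypothetical usage with pre-extracted penultimate-layer features.
feats = np.random.randn(1000, 16)
labs = np.random.randint(0, 10, size=1000)
priors, gaussians = fit_class_gaussians(feats, labs)
density = feature_density(np.random.randn(5, 16), priors, gaussians)
```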

On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty

1 code implementation 22 Feb 2021 Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal

Inducing point Gaussian process approximations are often considered a gold standard in uncertainty estimation since they retain many of the properties of the exact GP and scale to large datasets.

Gaussian Processes, General Classification

Variational Deterministic Uncertainty Quantification

no code implementations 1 Jan 2021 Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal

Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ).

Causal Inference, Regression

Single Shot Structured Pruning Before Training

no code implementations 1 Jul 2020 Joost van Amersfoort, Milad Alizadeh, Sebastian Farquhar, Nicholas Lane, Yarin Gal

We introduce a method to speed up training by 2x and inference by 3x in deep neural networks using structured pruning applied before training.
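The entry above reports speedups from structured pruning applied before training but does not describe the criterion here. Below is a hedged, illustrative sketch of the general recipe: score whole output channels of an untrained layer with a SNIP-style weight-times-gradient saliency and instantiate a smaller layer from the surviving channels. The toy pooled-logits loss and the per-channel aggregation rule are assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_saliency(conv: nn.Conv2d, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Score each output channel of an untrained conv layer with a
    SNIP-style |weight * gradient| saliency summed over the channel.
    The aggregation rule here is an illustrative assumption."""
    out = F.relu(conv(x))
    loss = F.cross_entropy(out.mean(dim=(2, 3)), y)   # toy head: pooled logits
    grad = torch.autograd.grad(loss, conv.weight)[0]
    return (conv.weight * grad).abs().sum(dim=(1, 2, 3))

conv = nn.Conv2d(3, 10, kernel_size=3, padding=1)      # 10 output channels = 10 toy classes
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
scores = channel_saliency(conv, x, y)
keep = scores.topk(k=6).indices                         # keep the 6 most salient channels
pruned = nn.Conv2d(3, 6, kernel_size=3, padding=1)      # smaller layer, trained from scratch
pruned.weight.data.copy_(conv.weight.data[keep])
pruned.bias.data.copy_(conv.bias.data[keep])
```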

Uncertainty Estimation Using a Single Deep Deterministic Neural Network

2 code implementations 4 Mar 2020 Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal

We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.

Out-of-Distribution Detection

BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning

2 code implementations NeurIPS 2019 Andreas Kirsch, Joost van Amersfoort, Yarin Gal

We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning.

Active Learning
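For context on the acquisition function named above, the sketch below computes the standard per-point BALD score (mutual information between a prediction and the model parameters) from Monte Carlo samples; BatchBALD replaces this point-wise score with the joint mutual information of an entire candidate batch, chosen greedily, which the sketch does not reproduce. The array shapes and the naive top-k selection are illustrative assumptions.

```python
import numpy as np

def bald_scores(probs: np.ndarray) -> np.ndarray:
    """Per-point BALD score estimated from Monte Carlo samples.
    probs has shape (n_mc_samples, n_points, n_classes).
    BatchBALD instead estimates the joint mutual information of a whole
    batch and selects points greedily; that estimator is more involved
    and not reproduced in this sketch."""
    eps = 1e-12
    mean_probs = probs.mean(axis=0)                                    # predictive distribution
    entropy_of_mean = -(mean_probs * np.log(mean_probs + eps)).sum(-1)
    mean_of_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)
    return entropy_of_mean - mean_of_entropy                           # high = informative point

# Hypothetical usage: 20 dropout samples over 1000 pool points, 10 classes.
mc_probs = np.random.dirichlet(np.ones(10), size=(20, 1000))
query = np.argsort(-bald_scores(mc_probs))[:32]                        # naive top-k acquisition
```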

Deep Hashing using Entropy Regularised Product Quantisation Network

no code implementations 11 Feb 2019 Jo Schlemper, Jose Caballero, Andy Aitken, Joost van Amersfoort

In large-scale systems, approximate nearest neighbour search is a crucial algorithm for efficient data retrieval.
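As background for the listing above, the sketch below shows classical product quantisation for approximate nearest neighbour search: each vector is split into sub-vectors, each sub-space gets a small k-means codebook, and a vector compresses to a few small integer codes. The paper itself learns the quantisation end-to-end with an entropy regulariser; this is only the textbook baseline it builds on, with illustrative sizes.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(data: np.ndarray, n_subvectors: int = 4, n_codes: int = 256):
    """Classical product quantisation: split each vector into sub-vectors and
    learn a small k-means codebook per sub-space (textbook baseline only)."""
    sub_dim = data.shape[1] // n_subvectors
    return [
        KMeans(n_clusters=n_codes, n_init=4).fit(data[:, i * sub_dim:(i + 1) * sub_dim])
        for i in range(n_subvectors)
    ]

def encode(x: np.ndarray, codebooks) -> np.ndarray:
    """Compress each vector to one codebook index per sub-space."""
    sub_dim = x.shape[1] // len(codebooks)
    return np.stack(
        [cb.predict(x[:, i * sub_dim:(i + 1) * sub_dim]) for i, cb in enumerate(codebooks)],
        axis=1,
    )

data = np.random.randn(5000, 64).astype(np.float32)
codebooks = train_pq(data, n_subvectors=4, n_codes=16)
codes = encode(data[:10], codebooks)   # 10 vectors -> 10 x 4 small integer codes
```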

Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks

no code implementations 16 Nov 2017 Joost van Amersfoort, Wenzhe Shi, Alejandro Acosta, Francisco Massa, Johannes Totz, Zehan Wang, Jose Caballero

To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses.
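To make the loss composition mentioned above concrete, here is a hedged sketch of a combined objective with a pixel-space content loss, a feature-space content loss, and an adversarial term. The feature_extractor stand-in, the binary cross-entropy generator loss, and the weights w_* are illustrative assumptions; the paper's exact losses, feature layers, and weighting differ.

```python
import torch
import torch.nn.functional as F

def interpolation_loss(pred, target, disc_logits_fake, feature_extractor,
                       w_adv=1e-3, w_feat=1.0, w_pix=1.0):
    """Sketch of a combined perceptual loss for frame interpolation:
    pixel-space content loss + feature-space content loss + adversarial term.
    `feature_extractor` (e.g. a pretrained conv net) and the weights are
    illustrative assumptions, not the paper's exact formulation."""
    pixel_loss = F.l1_loss(pred, target)
    feature_loss = F.mse_loss(feature_extractor(pred), feature_extractor(target))
    # Generator-side adversarial loss: push discriminator logits on fakes towards "real".
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return w_pix * pixel_loss + w_feat * feature_loss + w_adv * adv_loss

# Hypothetical usage with a stand-in feature network.
extractor = torch.nn.Conv2d(3, 8, 3, padding=1)
pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = interpolation_loss(pred, target, torch.randn(2, 1), extractor)
```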

Transformation-Based Models of Video Sequences

no code implementations 29 Jan 2017 Joost van Amersfoort, Anitha Kannan, Marc'Aurelio Ranzato, Arthur Szlam, Du Tran, Soumith Chintala

In this work we propose a simple unsupervised approach for next frame prediction in video.
