Search Results for author: David Janz

Found 13 papers, 6 papers with code

Ensemble sampling for linear bandits: small ensembles suffice

no code implementations · 14 Nov 2023 · David Janz, Alexander E. Litvak, Csaba Szepesvári

We provide the first useful and rigorous analysis of ensemble sampling for the stochastic linear bandit setting.
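
As a rough illustration of the method being analysed, the sketch below maintains a small ensemble of perturbed least-squares estimates and, at each round, acts greedily with respect to a uniformly sampled member. All names and constants (ensemble size m, regulariser lam, noise scale sigma) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

# Minimal sketch of ensemble sampling for a d-dimensional linear bandit.
# Ensemble size, regulariser and noise scale are illustrative choices.
rng = np.random.default_rng(0)
d, m, lam, sigma = 5, 10, 1.0, 0.1

V = [lam * np.eye(d) for _ in range(m)]                           # per-member Gram matrices
b = [np.sqrt(lam) * rng.normal(0.0, sigma, d) for _ in range(m)]  # perturbed prior terms

def act(arms):
    """Sample one ensemble member uniformly and play its greedy arm."""
    j = rng.integers(m)
    theta = np.linalg.solve(V[j], b[j])
    return max(arms, key=lambda a: a @ theta)

def update(arm, reward):
    """Update each member with an independently perturbed copy of the reward."""
    for j in range(m):
        V[j] += np.outer(arm, arm)
        b[j] += (reward + rng.normal(0.0, sigma)) * arm
```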

Exploration via linearly perturbed loss minimisation

no code implementations · 13 Nov 2023 · David Janz, Shuai Liu, Alex Ayoub, Csaba Szepesvári

We show that, for the case of generalised linear bandits, EVILL reduces to perturbed history exploration (PHE), a method where exploration is done by training on randomly perturbed rewards.

Tasks: Thompson Sampling
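
For intuition, here is a hedged sketch of the PHE idea mentioned above: fit a regularised least-squares estimate on rewards corrupted by additive Gaussian noise, then act greedily. The perturbation scale a and regulariser lam are illustrative assumptions.

```python
import numpy as np

def phe_arm(arms, X, y, a=1.0, lam=1.0, rng=None):
    """Perturbed-history exploration for a linear bandit (illustrative sketch):
    fit least squares on randomly perturbed rewards, then play the greedy arm."""
    rng = rng or np.random.default_rng()
    z = y + a * rng.normal(size=y.shape)             # randomly perturbed rewards
    theta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ z)
    return max(arms, key=lambda arm: arm @ theta)
```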

Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

1 code implementation · NeurIPS 2023 · Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, Alexander Terenin

Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems.

Tasks: Bayesian Optimization, Decision Making +1
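
As a simplified illustration of the idea: the GP posterior mean requires the solve v = (K + σ²I)⁻¹y, and the sketch below obtains it by gradient descent on the equivalent quadratic objective rather than a direct solve. The paper uses stochastic gradients and also draws posterior samples; this full-gradient version, with its step size and iteration count, is an assumption made for brevity.

```python
import numpy as np

def gd_gp_solve(K, y, noise=0.1, steps=2000, lr=None):
    """Solve (K + noise^2 I) v = y by gradient descent on the quadratic
    objective 0.5 v^T A v - y^T v, avoiding a direct linear solve."""
    A = K + noise**2 * np.eye(len(y))
    lr = lr or 1.0 / np.linalg.norm(A, 2)  # conservative step: 1 / largest eigenvalue
    v = np.zeros_like(y)
    for _ in range(steps):
        v -= lr * (A @ v - y)              # gradient of the quadratic objective
    return v                               # posterior mean at x* is k(x*, X) @ v
```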

Sampling-based inference for large linear models, with application to linearised Laplace

1 code implementation · 10 Oct 2022 · Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato

Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.

Tasks: Bayesian Inference, Uncertainty Quantification
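
A hedged sketch of sample-then-optimise posterior sampling for a Bayesian linear model y = Φw + ε, the kind of sampling-based inference the paper scales up: each posterior sample is the solution of a randomly perturbed regularised least-squares problem. All names are illustrative, and the direct solve below stands in for the large-scale optimiser the paper actually requires.

```python
import numpy as np

def posterior_sample(Phi, y, alpha=1.0, noise=0.1, rng=None):
    """One posterior sample of w in y = Phi @ w + eps via a perturbed
    regularised least-squares solve (sample-then-optimise)."""
    rng = rng or np.random.default_rng()
    n, d = Phi.shape
    w0 = rng.normal(0.0, 1.0 / np.sqrt(alpha), d)   # draw from the N(0, alpha^-1 I) prior
    eps = rng.normal(0.0, noise, n)                 # draw observation noise
    A = Phi.T @ Phi / noise**2 + alpha * np.eye(d)  # posterior precision
    return np.linalg.solve(A, Phi.T @ (y - eps) / noise**2 + alpha * w0)
```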

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

no code implementations · 17 Jun 2022 · Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato

The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.

Tasks: Model Selection

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations · AABI Symposium 2022 · Javier Antorán, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NNs) with normalisation layers, i.e. batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Tasks: Image Classification, Model Selection +1

Bandit optimisation of functions in the Matérn kernel RKHS

no code implementations · 28 Jan 2020 · David Janz, David R. Burt, Javier González

We consider the problem of optimising functions in the reproducing kernel Hilbert space (RKHS) of a Matérn kernel with smoothness parameter $\nu$ over the domain $[0, 1]^d$ under noisy bandit feedback.
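
For reference, here is the ν = 5/2 member of the Matérn family in its standard closed form; the lengthscale parameter ell is an illustrative addition.

```python
import numpy as np

def matern52(x, y, ell=1.0):
    """Matérn kernel with smoothness nu = 5/2 (standard closed-form case)."""
    r = np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(y))
    s = np.sqrt(5.0) * r / ell
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)
```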

Learning a Generative Model for Validity in Complex Discrete Structures

1 code implementation · ICLR 2018 · David Janz, Jos van der Westhuizen, Brooks Paige, Matt J. Kusner, José Miguel Hernández-Lobato

This validator provides insight into how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence-based models to generate valid sequences, and thus faithfully model discrete objects.

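A minimal sketch of the constrained-generation idea described above, assuming a hypothetical validator(prefix) that scores whether a prefix can still be completed into a valid sequence; all names are illustrative, not the paper's API.

```python
import numpy as np

def constrained_step(prefix, token_probs, validator, vocab, rng=None):
    """Sample the next token after masking continuations the validator rejects.
    `validator` is a hypothetical callable returning 1.0 (extendable) or 0.0."""
    rng = rng or np.random.default_rng()
    mask = np.array([validator(prefix + [t]) for t in vocab], dtype=float)
    p = token_probs * mask   # zero out invalid continuations
    p /= p.sum()             # assumes at least one valid token remains
    return vocab[rng.choice(len(vocab), p=p)]
```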

Probabilistic structure discovery in time series data

no code implementations · 21 Nov 2016 · David Janz, Brooks Paige, Tom Rainforth, Jan-Willem van de Meent, Frank Wood

Existing methods for structure discovery in time series data construct interpretable, compositional kernels for Gaussian process regression models.

Tasks: Regression, Time Series +1
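
To make "interpretable, compositional kernels" concrete, a hedged sketch: base kernels are combined by sums and products, with each term mapping to a qualitative statement about the series. All parameter values are illustrative.

```python
import numpy as np

def se(x, y, ell=1.0):
    """Squared-exponential base kernel: smooth trends."""
    return np.exp(-0.5 * (x - y) ** 2 / ell**2)

def periodic(x, y, period=1.0, ell=1.0):
    """Periodic base kernel: repeating structure."""
    return np.exp(-2.0 * np.sin(np.pi * abs(x - y) / period) ** 2 / ell**2)

def composed(x, y):
    """E.g. a long-term trend plus a trend-modulated periodic component."""
    return se(x, y, ell=10.0) + se(x, y, ell=5.0) * periodic(x, y, period=2.0)
```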
