no code implementations • 11 Sep 2024 • Fengzhe Zhang, Jiajun He, Laurence I. Midgley, Javier Antorán, José Miguel Hernández-Lobato
Diffusion models have shown promise for advancing Boltzmann Generators.
1 code implementation • 28 May 2024 • Jihao Andreas Lin, Shreyas Padhy, Bruno Mlodozeniec, Javier Antorán, José Miguel Hernández-Lobato
Scaling hyperparameter optimisation to very large datasets remains an open problem in the Gaussian process community.
1 code implementation • 4 Mar 2024 • James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato
Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.
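As a concrete illustration of the symmetry idea in this entry, the sketch below averages an arbitrary feature map over a finite rotation group to produce an invariant one (symmetrisation). The group, the feature map `f`, and all constants are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetrisation: averaging any function over a finite group of transformations
# yields a group-invariant function. Here the group is the four planar
# rotations of a 2-D point, and f is an arbitrary (non-invariant) feature.
rots = [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        for t in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
f = lambda x: np.tanh(np.array([1.3, -0.7]) @ x)      # arbitrary scalar feature

f_inv = lambda x: np.mean([f(R @ x) for R in rots])   # group-averaged feature

x = rng.standard_normal(2)
assert np.isclose(f_inv(x), f_inv(rots[1] @ x))       # invariant under the group
```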
1 code implementation • 31 Oct 2023 • Jihao Andreas Lin, Shreyas Padhy, Javier Antorán, Austin Tripp, Alexander Terenin, Csaba Szepesvári, José Miguel Hernández-Lobato, David Janz
We study the use of stochastic gradient descent for solving this linear system, and show that when "done right" -- by which we mean using specific insights from the optimisation and kernel communities -- stochastic gradient descent is highly effective.
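The linear system in question is the kernel ridge system (K + σ²I)α = y. Below is a minimal sketch of plain SGD on that system; the toy RBF kernel, step size, minibatch size, and exponential iterate averaging are illustrative stand-ins, not the paper's recipe (which, roughly, works in a dual parameterisation with momentum and geometric iterate averaging).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Illustrative squared-exponential kernel between row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
K = rbf_kernel(X, X) + 0.1**2 * np.eye(n)     # kernel matrix plus noise term

# SGD on the quadratic 0.5 * a^T K a - a^T y, whose minimiser solves K a = y
# (the GP posterior-mean weights). Each step samples rows of K; rescaling by
# n / batch keeps the gradient estimate unbiased, and iterate averaging (a
# simple exponential moving average here) stabilises the output.
a, a_avg, lr, batch = np.zeros(n), np.zeros(n), 0.5 / n, 64
for _ in range(5000):
    idx = rng.choice(n, size=batch, replace=False)
    grad = np.zeros(n)
    grad[idx] = (K[idx] @ a - y[idx]) * (n / batch)
    a -= lr * grad
    a_avg = 0.99 * a_avg + 0.01 * a

print("residual |K a - y|:", np.linalg.norm(K @ a_avg - y))  # shrinks with more steps
```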
1 code implementation • NeurIPS 2023 • Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato
Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.
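A minimal RealNVP-style affine coupling layer makes the claim concrete: the Jacobian is triangular, so log|det J| is a cheap sum, and the inverse is closed-form, giving fast sampling and density evaluation. The fixed-weight tanh conditioner and dimensions below are illustrative; the paper's SE(3)-equivariant construction is more involved.

```python
import numpy as np

class AffineCoupling:
    """Minimal affine coupling layer with fixed (untrained) weights.

    x = [x1, x2] -> y = [x1, x2 * exp(s(x1)) + t(x1)], so the Jacobian is
    triangular, log|det J| = sum(s(x1)), and the inverse is closed-form.
    """
    def __init__(self, dim, rng):
        self.d = dim // 2
        self.W = 0.1 * rng.standard_normal((dim - self.d, self.d, 2))

    def _scale_shift(self, x1):
        h = np.tanh(x1)                        # stand-in for a conditioner network
        s = np.einsum('ij,oj->io', h, self.W[..., 0])
        t = np.einsum('ij,oj->io', h, self.W[..., 1])
        return s, t

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self._scale_shift(x1)
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=1), s.sum(axis=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self._scale_shift(y1)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

rng = np.random.default_rng(0)
layer = AffineCoupling(dim=4, rng=rng)
x = rng.standard_normal((3, 4))
y, logdet = layer.forward(x)                   # sample direction + log|det J|
assert np.allclose(layer.inverse(y), x)        # exact invertibility
```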
no code implementations • 12 Jul 2023 • Jihao Andreas Lin, Javier Antorán, José Miguel Hernández-Lobato
The Laplace approximation provides a closed-form model selection objective for neural networks (NNs).
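The objective in question is the Laplace approximation to the log marginal likelihood, log p(D) ≈ log p(D, θ*) + (d/2) log 2π − ½ log|H|, with H the Hessian of the negative log joint at the MAP θ*. The sketch below evaluates it for Bayesian linear regression, where the approximation happens to be exact; for NNs, H is typically replaced by e.g. a GGN approximation. All hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.3 * rng.standard_normal(n)
alpha, sigma2 = 1.0, 0.3**2                    # prior precision, noise variance

# Negative log joint: ||y - Xw||^2 / (2 sigma2) + alpha ||w||^2 / 2 (+ consts).
H = X.T @ X / sigma2 + alpha * np.eye(d)       # Hessian of the negative log joint
w_map = np.linalg.solve(H, X.T @ y / sigma2)   # MAP weights

log_joint = (-0.5 * n * np.log(2 * np.pi * sigma2)
             - 0.5 * np.sum((y - X @ w_map) ** 2) / sigma2
             + 0.5 * d * np.log(alpha / (2 * np.pi))
             - 0.5 * alpha * w_map @ w_map)

# Laplace evidence: log p(D) ~= log p(D, w_map) + (d/2) log 2pi - 0.5 log|H|.
log_evidence = (log_joint + 0.5 * d * np.log(2 * np.pi)
                - 0.5 * np.linalg.slogdet(H)[1])
print("Laplace log evidence:", log_evidence)   # exact for this linear-Gaussian model
```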
1 code implementation • NeurIPS 2023 • Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, Alexander Terenin
Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems.
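The linear system referred to is (K + σ²I)v = y: both the posterior mean and, via pathwise conditioning (Matheron's rule), posterior samples route through it, and a direct solve costs O(n³) in the number of observations. A self-contained 1-D sketch, with an illustrative RBF kernel and noise level:

```python
import numpy as np

def k(A, B):  # illustrative 1-D squared-exponential kernel
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)

rng = np.random.default_rng(1)
Xtr = rng.uniform(-3, 3, 40)
ytr = np.sin(Xtr) + 0.1 * rng.standard_normal(40)
Xte = np.linspace(-3, 3, 100)
noise = 0.1**2

# The bottleneck: every posterior quantity routes through this linear system.
Kxx = k(Xtr, Xtr) + noise * np.eye(40)
mean = k(Xte, Xtr) @ np.linalg.solve(Kxx, ytr)       # posterior mean

# Pathwise posterior sample (Matheron's rule): draw a joint prior sample over
# test and train inputs, then correct it with a second solve on the residuals.
Xall = np.concatenate([Xte, Xtr])
L = np.linalg.cholesky(k(Xall, Xall) + 1e-8 * np.eye(140))
prior = L @ rng.standard_normal(140)
f_te, f_tr = prior[:100], prior[100:]
eps = np.sqrt(noise) * rng.standard_normal(40)
sample = f_te + k(Xte, Xtr) @ np.linalg.solve(Kxx, ytr - f_tr - eps)
```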
1 code implementation • 20 Feb 2023 • Riccardo Barbano, Javier Antorán, Johannes Leuschner, José Miguel Hernández-Lobato, Bangti Jin, Željko Kereta
Deep learning has been widely used for solving image reconstruction tasks, but its deployment has been held back by the shortage of high-quality training data.
1 code implementation • 10 Oct 2022 • Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato
Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.
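A minimal sketch of the linearised Laplace construction referenced here: linearise the network around a MAP estimate, after which it is a linear model in the parameters and Gaussian inference is closed-form. The toy network, finite-difference Jacobian, and hyperparameters below are illustrative stand-ins, not the paper's large-scale machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(x, theta):
    """Toy MLP standing in for a large network: 5 hidden tanh units."""
    W1, b1, w2 = theta[:5], theta[5:10], theta[10:]
    return np.tanh(np.outer(x, W1) + b1) @ w2

theta_map = 0.5 * rng.standard_normal(15)      # pretend this is the MAP fit
X = np.linspace(-2, 2, 30)
alpha, sigma2 = 1.0, 0.1**2                    # prior precision, noise variance

# Linearise around the MAP: f(x; theta) ~= f(x; theta*) + J(x)(theta - theta*).
# J is computed by finite differences purely for self-containment; autodiff
# would be used in practice.
eps = 1e-5
f0 = net(X, theta_map)
J = np.stack([(net(X, theta_map + eps * np.eye(15)[j]) - f0) / eps
              for j in range(15)], axis=1)     # (30 inputs x 15 params)

# The surrogate is now linear in the parameters, so the posterior is Gaussian
# with precision H and predictive variances come from J H^{-1} J^T.
H = J.T @ J / sigma2 + alpha * np.eye(15)
pred_var = np.sum(J * np.linalg.solve(H, J.T).T, axis=1) + sigma2
```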
1 code implementation • 11 Jul 2022 • Riccardo Barbano, Johannes Leuschner, Javier Antorán, Bangti Jin, José Miguel Hernández-Lobato
We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction.
no code implementations • 17 Jun 2022 • Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.
2 code implementations • 28 Feb 2022 • Javier Antorán, Riccardo Barbano, Johannes Leuschner, José Miguel Hernández-Lobato, Bangti Jin
Existing deep-learning-based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment.
no code implementations • NeurIPS Workshop ICBINB 2021 • Chelsea Murray, James U. Allingham, Javier Antorán, José Miguel Hernández-Lobato
Farquhar et al. [2021] show that correcting for active learning bias with underparameterised models leads to improved downstream performance.
no code implementations • 13 Dec 2021 • Chelsea Murray, James U. Allingham, Javier Antorán, José Miguel Hernández-Lobato
In active learning, the size and complexity of the training dataset changes over time.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
1 code implementation • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato
In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.
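A sketch of that two-step recipe, under stated assumptions: the Jacobian `J` and MAP weights are random stand-ins for quantities computed from a real network (see the linearised-Laplace sketch earlier in this list), and the subnetwork is chosen by plain weight magnitude here rather than the paper's posterior-variance-based criterion.

```python
import numpy as np

# Stand-ins for a real model: J is the (n x d) Jacobian of the network
# outputs w.r.t. the weights at the MAP, theta_map the MAP weights.
rng = np.random.default_rng(0)
n, d, alpha, sigma2 = 30, 15, 1.0, 0.01
J, theta_map = rng.standard_normal((n, d)), rng.standard_normal(d)

# 1) Pick a small subnetwork. The paper scores weights with a
#    posterior-variance-based criterion; magnitude is a simple stand-in.
S = np.argsort(-np.abs(theta_map))[:3]         # indices of retained weights

# 2) Full-covariance Gaussian posterior over the subnetwork only: all other
#    weights stay clamped at their MAP values, so the GGN reduces to the
#    sub-Jacobian's Gram matrix plus the prior.
J_S = J[:, S]
H_S = J_S.T @ J_S / sigma2 + alpha * np.eye(len(S))
Sigma_S = np.linalg.inv(H_S)                   # (|S| x |S|) posterior covariance

# Predictive variance from the subnetwork posterior, plus observation noise:
pred_var = np.sum(J_S * (J_S @ Sigma_S), axis=1) + sigma2
```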
1 code implementation • NeurIPS 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited.
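One way to get uncertainty from a single forward pass, sketched below in the spirit of depth marginalisation: attach an output head after every residual block, so one pass yields one prediction per depth, then average them under a categorical depth weighting. All shapes, weights, and the uniform weighting are illustrative, not the paper's trained model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
D, h, c = 4, 16, 3                             # depth, width, classes (arbitrary)
blocks = [0.3 * rng.standard_normal((h, h)) for _ in range(D)]
heads = [0.3 * rng.standard_normal((h, c)) for _ in range(D)]
beta = np.full(D, 1.0 / D)                     # depth weights; learned in practice

x = rng.standard_normal((5, h))                # a batch of 5 inputs
per_depth = []
for W, V in zip(blocks, heads):
    x = np.tanh(x @ W) + x                     # residual block
    per_depth.append(softmax(x @ V))           # prediction at this depth

# One forward pass has produced D predictions; marginalising over depth gives
# the final prediction, and disagreement across depths signals uncertainty.
probs = np.tensordot(beta, np.stack(per_depth), axes=1)
disagreement = np.stack(per_depth).std(axis=0).mean(axis=-1)
```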
1 code implementation • ICLR 2021 • Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato
Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems.
1 code implementation • 6 Feb 2020 • Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.
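A DARTS-style sketch of the one-shot idea, related to but not identical with this paper's depth-search approach: candidate operations are mixed with softmax weights over learnable architecture logits, so weights and architecture train jointly by gradient descent. The three candidate ops and dimensions are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W_a, W_b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))

# One-shot NAS in one line of math: evaluate a softmax-weighted mixture of the
# candidate operations instead of training each architecture separately.
ops = [lambda v: np.tanh(W_a @ v),             # candidate op 1: dense + tanh
       lambda v: np.maximum(W_b @ v, 0),       # candidate op 2: dense + ReLU
       lambda v: v]                            # candidate op 3: identity (skip)
arch_logits = np.zeros(3)                      # architecture parameters, trained
                                               # jointly with W_a, W_b in practice
mix = softmax(arch_logits)
out = sum(w * op(x) for w, op in zip(mix, ops))
# After training, the op with the largest logit is kept and the rest pruned.
```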