Search Results for author: Javier Antoran

Found 5 papers, 1 paper with code

Deep End-to-end Causal Inference

1 code implementation • 4 Feb 2022 • Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang

Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment and policy making.

Causal Discovery Causal Inference +1

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations • AABI Symposium 2022 • Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NNs) with normalisation layers, i.e. batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Image Classification Model Selection +1
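The snippet above relates to the well-known fact that normalisation layers make a network's output invariant to the scale of the preceding layer's weights, so the posterior has flat directions and the "volume of a mode" is ill-defined. Below is a minimal PyTorch sketch (not from the paper; the toy architecture and the scaling constant 10.0 are arbitrary illustrative choices) demonstrating that invariance.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network: a linear layer followed by batch norm.
net = nn.Sequential(
    nn.Linear(5, 8, bias=False),
    nn.BatchNorm1d(8),
    nn.ReLU(),
    nn.Linear(8, 1),
)
net.train()  # batch norm uses per-batch statistics, as during training

x = torch.randn(32, 5)
y_ref = net(x)

# Rescale the pre-normalisation weights by an arbitrary positive constant.
with torch.no_grad():
    net[0].weight.mul_(10.0)

y_scaled = net(x)

# The outputs match: the function is invariant to the scale of these weights,
# so the posterior over them has unbounded flat directions.
print(torch.allclose(y_ref, y_scaled, atol=1e-4))  # True
```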

A Probabilistic Deep Image Prior over Image Space

no code implementations • AABI Symposium 2022 • Riccardo Barbano, Javier Antoran, José Miguel Hernández-Lobato, Bangti Jin

The deep image prior regularises under-specified image reconstruction problems by reparametrising the target image as the output of a CNN.

Image Reconstruction
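The reparametrisation described in the snippet above works by fitting only the CNN's parameters to the observed measurements, keeping a random input fixed and relying on early stopping as the regulariser. The following is a minimal sketch of that standard deep-image-prior loop (not the paper's probabilistic extension); the small CNN, the identity forward operator, and all variable names are placeholder choices for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical setup: y_noisy is the corrupted observation and forward_op is
# the known measurement operator (identity here, i.e. plain denoising).
H, W = 64, 64
y_noisy = torch.randn(1, 1, H, W)          # placeholder observation
forward_op = lambda img: img               # identity forward operator

# A small CNN standing in for the usual U-Net; only its parameters are
# optimised -- the input z stays fixed random noise.
cnn = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, H, W)

opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for step in range(2000):                   # early stopping acts as the regulariser
    opt.zero_grad()
    x_hat = cnn(z)                         # the target image reparametrised as a CNN output
    loss = ((forward_op(x_hat) - y_noisy) ** 2).mean()
    loss.backward()
    opt.step()

reconstruction = cnn(z).detach()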

Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference

no code implementations • AABI Symposium 2021 • Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antoran, José Miguel Hernández-Lobato

In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.

Bayesian Inference
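The two-step recipe in the snippet above (point estimate first, then a full-covariance Gaussian over a subnetwork) can be illustrated with a toy regression example. In the sketch below the last linear layer is treated as the subnetwork purely for simplicity (the paper selects subnetworks differently), the Gaussian likelihood makes the Hessian over those weights available in closed form, and the noise level and prior precision are arbitrary illustrative values.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Step 1: train a point estimate of the full network ---------------------
X = torch.linspace(-3, 3, 100).unsqueeze(1)
y = torch.sin(X) + 0.1 * torch.randn_like(X)

feats = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh())
head = nn.Linear(32, 1, bias=False)        # the "subnetwork" in this toy example
model = nn.Sequential(feats, head)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean()
    loss.backward()
    opt.step()

# --- Step 2: full-covariance Gaussian posterior over the subnetwork only ----
# With a Gaussian likelihood and a linear last layer, the Hessian over the
# head weights is Phi^T Phi / sigma^2 + (prior precision) * I.
sigma2, prior_prec = 0.1 ** 2, 1.0
with torch.no_grad():
    Phi = feats(X)                                       # fixed features, shape (N, 32)
precision = Phi.T @ Phi / sigma2 + prior_prec * torch.eye(Phi.shape[1])
cov = torch.linalg.inv(precision)                        # full 32 x 32 covariance

# Predictive variance at a new input: phi(x)^T cov phi(x) + sigma^2;
# all other weights stay fixed at their point estimate.
with torch.no_grad():
    phi_test = feats(torch.tensor([[0.5]]))
pred_var = phi_test @ cov @ phi_test.T + sigma2
```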

Disentangling and Learning Robust Representations with Natural Clustering

no code implementations • 27 Jan 2019 • Javier Antoran, Antonio Miguel

Learning representations that disentangle the underlying factors of variability in data is an intuitive way to achieve generalization in deep models.

Clustering
