
no code implementations • ICML 2020 • Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal

We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.

no code implementations • ICML 2020 • Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes with RKHS Fourier Features, an extension of shallow inter-domain GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs) and demonstrate how to leverage existing approximate inference approaches to perform simple and scalable approximate inference on Inter-domain Deep Gaussian Processes.

no code implementations • 21 Apr 2022 • Andrew Jesson, Alyson Douglas, Peter Manshausen, Nicolai Meinshausen, Philip Stier, Yarin Gal, Uri Shalit

Here, we develop a continuous treatment-effect marginal sensitivity model (CMSM) and derive bounds that agree with both the observed data and a researcher-defined level of hidden confounding.

no code implementations • 3 Mar 2022 • Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, Stefan Bauer

Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability which introduces uncertainties in estimating the underlying structural causal model (SCM).

1 code implementation • ICLR 2022 • Milad Alizadeh, Shyam A. Tailor, Luisa M Zintgraf, Joost van Amersfoort, Sebastian Farquhar, Nicholas Donald Lane, Yarin Gal

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference.

1 code implementation • 14 Feb 2022 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth

We find that ASEs offer greater label-efficiency than the current state-of-the-art when applied to challenging model evaluation problems for deep neural networks.

no code implementations • 3 Feb 2022 • Andreas Kirsch, Yarin Gal

Jiang et al. (2021) give empirical evidence that the average test error of deep neural networks can be estimated via the prediction disagreement of two separately trained networks.
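The disagreement estimate discussed here is straightforward to compute. A minimal sketch (the function name and toy predictions are ours, not the authors' code), assuming two independently trained classifiers have produced hard predictions on the same unlabeled inputs:

```python
import numpy as np

def disagreement_rate(preds_a, preds_b):
    """Fraction of inputs on which two independently trained models disagree.

    Jiang et al. (2021) observe empirically that this quantity tracks the
    average test error of the models, without requiring any labels.
    `preds_a` and `preds_b` are arrays of predicted class labels.
    """
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Toy illustration with made-up predictions from two models:
a = [0, 1, 1, 2, 0, 1]
b = [0, 1, 2, 2, 0, 0]
print(disagreement_rate(a, b))  # 2 disagreements out of 6 inputs
```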

no code implementations • 24 Dec 2021 • Miroslav Fil, Binxin Ru, Clare Lyle, Yarin Gal

The success of neural architecture search (NAS) has historically been limited by excessive compute requirements.

1 code implementation • 19 Dec 2021 • Raghav Mehta, Angelos Filos, Ujjwal Baid, Chiharu Sako, Richard McKinley, Michael Rebsamen, Katrin Dätwyler, Raphael Meier, Piotr Radojewski, Gowtham Krishnan Murugesan, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Fang F. Yu, Baowei Fei, Ananth J. Madhuranthakam, Joseph A. Maldjian, Laura Daza, Catalina Gómez, Pablo Arbeláez, Chengliang Dai, Shuo Wang, Hadrien Raynaud, Yuanhan Mo, Elsa Angelini, Yike Guo, Wenjia Bai, Subhashis Banerjee, Linmin Pei, Murat AK, Sarahi Rosas-González, Illyess Zemmoura, Clovis Tauber, Minh H. Vu, Tufve Nyholm, Tommy Löfstedt, Laura Mora Ballestar, Veronica Vilaplana, Hugh McHugh, Gonzalo Maso Talou, Alan Wang, Jay Patel, Ken Chang, Katharina Hoebel, Mishka Gidwani, Nishanth Arun, Sharut Gupta, Mehak Aggarwal, Praveer Singh, Elizabeth R. Gerstner, Jayashree Kalpathy-Cramer, Nicolas Boutry, Alexis Huard, Lasitha Vidyaratne, Md Monibor Rahman, Khan M. Iftekharuddin, Joseph Chazalon, Elodie Puybareau, Guillaume Tochon, Jun Ma, Mariano Cabezas, Xavier Llado, Arnau Oliver, Liliana Valencia, Sergi Valverde, Mehdi Amian, Mohammadreza Soltaninejad, Andriy Myronenko, Ali Hatamizadeh, Xue Feng, Quan Dou, Nicholas Tustison, Craig Meyer, Nisarg A. Shah, Sanjay Talbar, Marc-André Weber, Abhishek Mahajan, Andras Jakab, Roland Wiest, Hassan M. Fathallah-Shaykh, Arash Nazeri, Mikhail Milchenko, Daniel Marcus, Aikaterini Kotrotsou, Rivka Colen, John Freymann, Justin Kirby, Christos Davatzikos, Bjoern Menze, Spyridon Bakas, Yarin Gal, Tal Arbel

In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation.

no code implementations • 1 Dec 2021 • Haiwen Huang, Joost van Amersfoort, Yarin Gal

Uncertainty estimation is a key component in any deployed machine learning system.

no code implementations • NeurIPS 2021 • Tim G. J. Rudner, Cong Lu, Michael Osborne, Yarin Gal, Yee Whye Teh

KL-regularized reinforcement learning from expert demonstrations has proved highly successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks.

1 code implementation • 29 Nov 2021 • Benedikt Höltgen, Lisa Schut, Jan M. Brauner, Yarin Gal

This is the aim of algorithms generating counterfactual explanations.

no code implementations • 15 Nov 2021 • Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal

In contrastive representation learning, data representation is trained so that it can classify the image instances even when the images are altered by augmentations.

no code implementations • 5 Nov 2021 • Muhammed Razzak, Gonzalo Mateo-Garcia, Luis Gómez-Chova, Yarin Gal, Freddie Kalaitzis

High resolution remote sensing imagery is used in a broad range of tasks, including detection and classification of objects.

1 code implementation • NeurIPS 2021 • Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, Yarin Gal

We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects.

no code implementations • 29 Oct 2021 • Jishnu Mukhoti, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

We extend Deep Deterministic Uncertainty (DDU), a method for uncertainty estimation using feature space densities, to semantic segmentation.

no code implementations • 28 Oct 2021 • Andrew Jesson, Peter Manshausen, Alyson Douglas, Duncan Watson-Parris, Yarin Gal, Philip Stier

Aerosol-cloud interactions include a myriad of effects that all begin when aerosol enters a cloud and acts as cloud condensation nuclei (CCN).

no code implementations • ICLR 2022 • Arash Mehrjou, Ashkan Soleymani, Andrew Jesson, Pascal Notin, Yarin Gal, Stefan Bauer, Patrick Schwab

GeneDisco contains a curated set of multiple publicly available experimental data sets as well as open-source implementations of state-of-the-art active learning policies for experimental design and exploration.

no code implementations • 29 Jul 2021 • Owen Convery, Lewis Smith, Yarin Gal, Adi Hanuka

Virtual Diagnostic (VD) is a deep learning tool that can be used to predict a diagnostic output.

2 code implementations • 15 Jul 2021 • Andrey Malinin, Neil Band, Alexander Ganshin, German Chesnokov, Yarin Gal, Mark J. F. Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panos Tigas, Boris Yangel

However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction.

Ranked #2 on Weather Forecasting on Shifts

no code implementations • 6 Jul 2021 • Sören Mindermann, Muhammed Razzak, Winnie Xu, Andreas Kirsch, Mrinank Sharma, Adrien Morisot, Aidan N. Gomez, Sebastian Farquhar, Jan Brauner, Yarin Gal

We introduce Goldilocks Selection, a technique for faster model training which selects a sequence of training points that are "just right".

1 code implementation • NeurIPS 2021 • Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal

Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions).

no code implementations • 22 Jun 2021 • Andreas Kirsch, Yarin Gal

A practical notation can convey valuable intuitions and concisely express new ideas.

no code implementations • 22 Jun 2021 • Andreas Kirsch, Tom Rainforth, Yarin Gal

Expanding on MacKay (1992), we argue that conventional model-based methods for active learning - like BALD - have a fundamental shortfall: they fail to directly account for the test-time distribution of the input variables.

no code implementations • 22 Jun 2021 • Andreas Kirsch, Sebastian Farquhar, Parmida Atighehchian, Andrew Jesson, Frederic Branchaud-Charron, Yarin Gal

We show how to extend single-sample acquisition functions to the batch setting.

no code implementations • ICLR 2022 • A. Tuan Nguyen, Toan Tran, Yarin Gal, Philip H. S. Torr, Atılım Güneş Baydin

A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain.

2 code implementations • 7 Jun 2021 • Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Faris Sbahi, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran

In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.

no code implementations • 4 Jun 2021 • Lewis Smith, Joost van Amersfoort, Haiwen Huang, Stephen Roberts, Yarin Gal

ResNets constrained to be bi-Lipschitz, that is, approximately distance preserving, have been a crucial component of recently proposed techniques for deterministic uncertainty quantification in neural models.

2 code implementations • NeurIPS 2021 • Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, Yarin Gal

We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input.

no code implementations • NeurIPS 2021 • Tim G. J. Rudner, Vitchyr H. Pong, Rowan Mcallister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

no code implementations • 10 Apr 2021 • Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia Díaz-Rodríguez, Océane Boulais, Aruna Sankaranarayanan, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin, Dava Newman

Our work aims to enable more visual communication of large-scale climate impacts via visualizing the output of coastal flood models as satellite imagery.

1 code implementation • 16 Mar 2021 • Lisa Schut, Oscar Key, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal

Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions.

no code implementations • 10 Mar 2021 • Lorenz Kuhn, Clare Lyle, Aidan N. Gomez, Jonas Rothfuss, Yarin Gal

Existing generalization measures that aim to capture a model's simplicity based on parameter counts or norms fail to explain generalization in overparameterized deep neural networks.

1 code implementation • 9 Mar 2021 • Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth

While approaches like active learning reduce the number of labels needed for model training, existing literature largely ignores the cost of labeling test data, typically unrealistically assuming large test sets for model evaluation.

no code implementations • ICLR Workshop SSL-RL 2021 • Clare Lyle, Amy Zhang, Minqi Jiang, Joelle Pineau, Yarin Gal

To address this, we present a robust exploration strategy which enables causal hypothesis-testing by interaction with the environment.

1 code implementation • 8 Mar 2021 • Andrew Jesson, Sören Mindermann, Yarin Gal, Uri Shalit

We study the problem of learning conditional average treatment effects (CATE) from high-dimensional, observational data with unobserved confounders.

1 code implementation • 24 Feb 2021 • Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar

This allows us to disentangle shared features and dynamics of the environment from agent-specific rewards and policies.

3 code implementations • 23 Feb 2021 • Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.
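As an illustration of uncertainty from feature-space densities, one can fit a Gaussian per class to a network's penultimate-layer features and score new inputs by their log-density, all from a single forward pass. A minimal sketch on synthetic 2-D "features" (our own illustrative code, not the paper's implementation; a shared covariance is one simple stabilizing choice):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(feats, labels):
    # Fit one Gaussian per class over feature vectors, with a shared
    # covariance for numerical stability (as in Gaussian discriminant analysis).
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    cov = np.cov(feats.T) + 1e-6 * np.eye(feats.shape[1])
    return means, cov

def log_density(x, means, cov):
    # Feature-space density of x: log-sum-exp over the per-class Gaussians.
    logps = [multivariate_normal.logpdf(x, mean=m, cov=cov) for m in means.values()]
    return np.logaddexp.reduce(np.asarray(logps))

# Synthetic features: two well-separated classes in 2-D.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
means, cov = fit_class_gaussians(feats, labels)

# An in-distribution point gets a higher density than a far-away (OOD) one:
print(log_density(np.zeros(2), means, cov) > log_density(np.full(2, 20.0), means, cov))
```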

1 code implementation • 22 Feb 2021 • Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal

Inducing point Gaussian process approximations are often considered a gold standard in uncertainty estimation since they retain many of the properties of the exact GP and scale to large datasets.

1 code implementation • 16 Feb 2021 • Mike Walmsley, Chris Lintott, Tobias Geron, Sandor Kruk, Coleman Krawczyk, Kyle W. Willett, Steven Bamford, Lee S. Kelvin, Lucy Fortson, Yarin Gal, William Keel, Karen L. Masters, Vihang Mehta, Brooke D. Simmons, Rebecca Smethurst, Lewis Smith, Elisabeth M. Baeten, Christine Macmillan

All classifications are used to train an ensemble of Bayesian convolutional neural networks (a state-of-the-art deep learning method) to predict posteriors for the detailed morphology of all 314,000 galaxies.

1 code implementation • NeurIPS 2021 • A. Tuan Nguyen, Toan Tran, Yarin Gal, Atılım Güneş Baydin

Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains.

no code implementations • 2 Feb 2021 • Panagiotis Tigas, Téo Bloch, Vishal Upendran, Banafsheh Ferdoushi, Mark C. M. Cheung, Siddha Ganju, Ryan M. McGranaghan, Yarin Gal, Asti Bhatt

Modeling and forecasting the solar wind-driven global magnetic field perturbations is an open challenge.

no code implementations • ICLR 2021 • Sebastian Farquhar, Yarin Gal, Tom Rainforth

Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution.

no code implementations • 11 Jan 2021 • Alexander Lavin, Ciarán M. Gilligan-Lee, Alessya Visnjic, Siddha Ganju, Dava Newman, Atılım Güneş Baydin, Sujoy Ganguly, Danny Lange, Amit Sharma, Stephan Zheng, Eric P. Xing, Adam Gibson, James Parr, Chris Mattmann, Yarin Gal

The development and deployment of machine learning (ML) systems can be executed easily with modern tools, but the process is typically rushed and means-to-an-end.

no code implementations • 10 Jan 2021 • Andreas Kirsch, Yarin Gal

We develop BatchEvaluationBALD, a new acquisition function for deep Bayesian active learning, as an expansion of BatchBALD that takes into account an evaluation set of unlabeled data, for example, the pool set.

no code implementations • 1 Jan 2021 • Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models.

no code implementations • ICLR 2021 • Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

no code implementations • 1 Jan 2021 • Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal

Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ).

1 code implementation • 27 Dec 2020 • Luiz F. G. dos Santos, Souvik Bose, Valentina Salvatelli, Brad Neuberg, Mark C. M. Cheung, Miho Janvier, Meng Jin, Yarin Gal, Paul Boerner, Atılım Güneş Baydin

Our approach establishes the framework for a novel technique to calibrate EUV instruments and advance our understanding of the cross-channel relation between different EUV channels.

no code implementations • AABI Symposium 2021 • Jishnu Mukhoti, Puneet K. Dokania, Philip H. S. Torr, Yarin Gal

We study batch normalisation in the context of variational inference methods in Bayesian neural networks, such as mean-field or MC Dropout.

no code implementations • AABI Symposium 2021 • Tim G. J. Rudner, Zonghao Chen, Yarin Gal

Bayesian neural networks (BNNs) define distributions over functions induced by distributions over parameters.

no code implementations • 17 Nov 2020 • Mizu Nishikawa-Toomey, Lewis Smith, Yarin Gal

We show that this novel architecture leads to improvements in accuracy when used for the galaxy morphology classification task on the Galaxy Zoo data set.

no code implementations • 1 Nov 2020 • Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes, an extension of inter-domain shallow GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs), and demonstrate how to leverage existing approximate inference methods to perform simple and scalable approximate inference using inter-domain features in DGPs.

1 code implementation • 1 Nov 2020 • Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth

We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues.

no code implementations • NeurIPS 2020 • Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk

This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood.

no code implementations • 16 Oct 2020 • Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia Díaz-Rodriguez, Océane Boulais, Aaron Piña, Dava Newman, Alexander Lavin, Yarin Gal, Chedy Raïssi

As climate change increases the intensity of natural disasters, society needs better tools for adaptation.

1 code implementation • 8 Oct 2020 • Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal

Motivated by poor resource utilisation, we introduce a class of intermediary strategies between local and global learning referred to as interlocking backpropagation.

no code implementations • 28 Sep 2020 • Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

no code implementations • NeurIPS 2020 • Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Gavin Leech, Anna B. Stephenson, Tomáš Gavenčiak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal

To what extent are effectiveness estimates of nonpharmaceutical interventions (NPIs) against COVID-19 influenced by the assumptions our models make?

no code implementations • 21 Jul 2020 • Pascal Notin, Aidan N. Gomez, Joanna Yoo, Yarin Gal

Pushing forward the compute efficacy frontier in deep learning is critical for tasks that require frequent model re-training or workloads that entail training a large number of models.

1 code implementation • NeurIPS 2020 • Andrew Jesson, Sören Mindermann, Uri Shalit, Yarin Gal

We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data, where standard applications of causal effect approaches fail.

no code implementations • 1 Jul 2020 • Joost van Amersfoort, Milad Alizadeh, Sebastian Farquhar, Nicholas Lane, Yarin Gal

We introduce a method to speed up training by 2x and inference by 3x in deep neural networks using structured pruning applied before training.

2 code implementations • ICML 2020 • Angelos Filos, Panagiotis Tigas, Rowan Mcallister, Nicholas Rhinehart, Sergey Levine, Yarin Gal

Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions.

2 code implementations • 18 Jun 2020 • Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.

no code implementations • 8 Jun 2020 • Tim Z. Xiao, Aidan N. Gomez, Yarin Gal

We detect out-of-training-distribution sentences in Neural Machine Translation using the Bayesian Deep Learning equivalent of Transformer models.

2 code implementations • NeurIPS 2021 • Binxin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

no code implementations • MIDL 2019 • Raghav Mehta, Angelos Filos, Yarin Gal, Tal Arbel

In this paper, we develop a metric designed to assess and rank uncertainty measures for the task of brain tumour sub-tissue segmentation in the BraTS 2019 sub-challenge on uncertainty quantification.

1 code implementation • ICLR 2020 • Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal

Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input.

no code implementations • 1 May 2020 • Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, Benjamin Bloem-Reddy

Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning.

no code implementations • 7 Apr 2020 • Lewis Smith, Lisa Schut, Yarin Gal, Mark van der Wilk

'Capsule' models try to explicitly represent the poses of objects, enforcing a linear relationship between an object's pose and that of its constituent parts.

no code implementations • 27 Mar 2020 • Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models.

no code implementations • 23 Mar 2020 • Yarin Gal, Vishnu Jejjala, Damian Kaloni Mayorga Pena, Challenger Mishra

Quantum chromodynamics (QCD) is the theory of the strong interaction.

1 code implementation • ICML 2020 • Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, Doina Precup

Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.

2 code implementations • 4 Mar 2020 • Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal

We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.

no code implementations • NeurIPS 2020 • Sebastian Farquhar, Lewis Smith, Yarin Gal

We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks.

1 code implementation • 22 Dec 2019 • Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon, Yarin Gal

From our comparison we conclude that some current techniques which solve benchmarks such as UCI 'overfit' their uncertainty to the dataset: when evaluated on our benchmark, these underperform in comparison to simpler baselines.

no code implementations • 9 Dec 2019 • Jacobo Roa-Vicens, Yuanbo Wang, Virgile Mison, Yarin Gal, Ricardo Silva

In this paper, we explore whether adversarial inverse RL algorithms can be adapted and trained within such latent space simulations from real market data, while maintaining their ability to recover agent rewards robust to variations in the underlying dynamics, and transfer them to new regimes of the original environment.

1 code implementation • 10 Nov 2019 • Valentina Salvatelli, Souvik Bose, Brad Neuberg, Luiz F. G. dos Santos, Mark Cheung, Miho Janvier, Atilim Gunes Baydin, Yarin Gal, Meng Jin

The synergy between machine learning and this enormous amount of data has the potential, still largely unexploited, to advance our understanding of the Sun and extend the capabilities of heliophysics missions.

1 code implementation • 10 Nov 2019 • Brad Neuberg, Souvik Bose, Valentina Salvatelli, Luiz F. G. dos Santos, Mark Cheung, Miho Janvier, Atilim Gunes Baydin, Yarin Gal, Meng Jin

As a part of NASA's Heliophysics System Observatory (HSO) fleet of satellites, the Solar Dynamics Observatory (SDO) has continuously monitored the Sun since 2010.

no code implementations • 4 Nov 2019 • Xavier Gitiaux, Shane A. Maloney, Anna Jungbluth, Carl Shneider, Paul J. Wright, Atılım Güneş Baydin, Michel Deudon, Yarin Gal, Alfredo Kalaitzis, Andrés Muñoz-Jaramillo

Machine learning techniques have been successfully applied to super-resolution tasks on natural images where visually pleasing results are sufficient.

no code implementations • 4 Nov 2019 • Anna Jungbluth, Xavier Gitiaux, Shane A. Maloney, Carl Shneider, Paul J. Wright, Alfredo Kalaitzis, Michel Deudon, Atılım Güneş Baydin, Yarin Gal, Andrés Muñoz-Jaramillo

Breakthroughs in our understanding of physical phenomena have traditionally followed improvements in instrumentation.

2 code implementations • ICLR 2020 • Luisa Zintgraf, Kyriacos Shiarlis, Maximilian Igl, Sebastian Schulze, Yarin Gal, Katja Hofmann, Shimon Whiteson

Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning.

no code implementations • 15 Oct 2019 • Chelsea Sidrane, Dylan J Fitzpatrick, Andrew Annex, Diane O'Donoghue, Yarin Gal, Piotr Biliński

In this work, we develop generalizable, multi-basin models of river flooding susceptibility using geographically-distributed data from the USGS stream gauge network.

no code implementations • 4 Oct 2019 • Kara Lamb, Garima Malhotra, Athanasios Vlontzos, Edward Wagstaff, Atılım Günes Baydin, Anahita Bhiwandiwalla, Yarin Gal, Alfredo Kalaitzis, Anthony Reina, Asti Bhatt

High energy particles originating from solar activity travel along the Earth's magnetic field and interact with the atmosphere around the higher latitudes.

no code implementations • 4 Oct 2019 • Gonzalo Mateo-Garcia, Silviu Oprea, Lewis Smith, Josh Veitch-Michaelis, Guy Schumann, Yarin Gal, Atılım Güneş Baydin, Dietmar Backes

Satellite imaging is a critical technology for monitoring and responding to natural disasters such as flooding.

no code implementations • 3 Oct 2019 • Kara Lamb, Garima Malhotra, Athanasios Vlontzos, Edward Wagstaff, Atılım Günes Baydin, Anahita Bhiwandiwalla, Yarin Gal, Alfredo Kalaitzis, Anthony Reina, Asti Bhatt

We propose a novel architecture and loss function to predict, 1 hour in advance, the magnitude of phase scintillations within a time window of ±5 minutes with state-of-the-art performance.

no code implementations • 25 Sep 2019 • Lisa Schut, Yarin Gal

Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification.

no code implementations • 21 Sep 2019 • Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska

Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.

1 code implementation • 2 Jul 2019 • Zachary Kenton, Angelos Filos, Owain Evans, Yarin Gal

Before deploying autonomous agents in the real world, we need to be confident they will perform safely in novel situations.

4 code implementations • 1 Jul 2019 • Sebastian Farquhar, Michael Osborne, Yarin Gal

The Radial BNN is motivated by avoiding a sampling problem in 'mean-field' variational inference (MFVI) caused by the so-called 'soap-bubble' pathology of multivariate Gaussians.
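The 'soap-bubble' pathology is easy to see numerically: samples from a high-dimensional standard Gaussian concentrate in a thin shell of radius about sqrt(d), so almost no probability mass lies near the mode. A quick illustrative check (our own toy experiment, not the paper's code):

```python
import numpy as np

# Norms of d-dimensional standard Gaussian samples concentrate around
# sqrt(d), with relative spread shrinking as d grows (roughly 1/sqrt(2d)).
# This is the 'soap bubble': in high dimensions, essentially no samples
# land near the mode at the origin.
rng = np.random.default_rng(0)
for d in (2, 100, 10_000):
    norms = np.linalg.norm(rng.normal(size=(2000, d)), axis=1)
    print(f"d={d:6d}  mean norm / sqrt(d) = {norms.mean() / np.sqrt(d):.3f}  "
          f"relative spread = {norms.std() / norms.mean():.4f}")
```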

2 code implementations • NeurIPS 2019 • Andreas Kirsch, Joost van Amersfoort, Yarin Gal

We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning.
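For context, the single-point BALD quantity that BatchBALD generalizes is simple to estimate from stochastic forward passes: the mutual information between a point's label and the model parameters is the entropy of the mean prediction minus the mean entropy of the predictions. A minimal sketch (our own illustrative code, not the paper's; BatchBALD itself scores batches jointly rather than pointwise):

```python
import numpy as np

def bald_scores(probs):
    """BALD acquisition scores from Monte Carlo predictive samples.

    probs: array of shape (n_mc_samples, n_points, n_classes), e.g. class
    probabilities from multiple stochastic (MC dropout) forward passes.
    Score = H[mean prediction] - mean H[prediction]: an estimate of the
    mutual information between the label and the model parameters.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                    # (n_points, n_classes)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)     # predictive entropy
    mean_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)  # expected entropy
    return entropy_of_mean - mean_entropy

# Point 0: the MC samples agree (confident) -> score near zero.
# Point 1: each sample is confident but they disagree -> high score.
probs = np.array([
    [[0.9, 0.1], [0.9, 0.1]],
    [[0.9, 0.1], [0.1, 0.9]],
])
scores = bald_scores(probs)
print(scores[1] > scores[0])  # True: disagreement signals epistemic uncertainty
```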

no code implementations • 11 Jun 2019 • Jacobo Roa-Vicens, Cyrine Chtourou, Angelos Filos, Francisco Rullan, Yarin Gal, Ricardo Silva

Given the expert agent's demonstrations, we attempt to discover their strategy by modelling their latent reward function using linear and Gaussian process (GP) regressors from previous literature, and our own approach through Bayesian neural networks (BNN).

2 code implementations • 31 May 2019 • Aidan N. Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.
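The selection step can be sketched in a few lines: mark the lowest-magnitude fraction of weights as drop candidates, then drop each candidate independently before the gradient step. An illustrative numpy sketch (the function name and default rates are our assumptions, not the paper's):

```python
import numpy as np

def targeted_dropout(weights, targeting_frac=0.5, drop_prob=0.5, rng=None):
    # Illustrative sketch: the targeting_frac lowest-magnitude weights are
    # candidates for dropping; each candidate is then dropped independently
    # with probability drop_prob. High-magnitude weights are never dropped,
    # which self-reinforces sparsity in the low-magnitude set.
    rng = rng or np.random.default_rng()
    flat = np.abs(weights).ravel()
    k = int(targeting_frac * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    candidates = np.abs(weights) < threshold
    drop_mask = candidates & (rng.random(weights.shape) < drop_prob)
    return np.where(drop_mask, 0.0, weights)

w = np.array([10.0, -9.0, 0.1, 0.2])
# With drop_prob=1.0 every candidate is dropped, zeroing the small weights:
print(targeted_dropout(w, targeting_frac=0.5, drop_prob=1.0))
```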

1 code implementation • 25 May 2019 • Adam D. Cobb, Michael D. Himes, Frank Soboczenski, Simone Zorzan, Molly D. O'Beirne, Atılım Güneş Baydin, Yarin Gal, Shawn D. Domagal-Goldman, Giada N. Arney, Daniel Angerhausen

We expand upon their approach by presenting a new machine learning model, \texttt{plan-net}, based on an ensemble of Bayesian neural networks that yields more accurate inferences than the random forest for the same data set of synthetic transmission spectra.

1 code implementation • 17 May 2019 • Mike Walmsley, Lewis Smith, Chris Lintott, Yarin Gal, Steven Bamford, Hugh Dickinson, Lucy Fortson, Sandor Kruk, Karen Masters, Claudia Scarlata, Brooke Simmons, Rebecca Smethurst, Darryl Wright

We use Bayesian convolutional neural networks and a novel generative model of Galaxy Zoo volunteer responses to infer posteriors for the visual morphology of galaxies.

no code implementations • ICLR 2019 • Yarin Gal, Lewis Smith

Lastly, we demonstrate the defence on a cats-vs-dogs image classification task with a VGG13 variant.

1 code implementation • ICLR 2019 • Milad Alizadeh, Javier Fernández-Marqués, Nicholas D. Lane, Yarin Gal

In this work, we empirically identify and study the effectiveness of the various ad-hoc techniques commonly used in the literature, providing best-practices for efficient training of binary models.

2 code implementations • 18 Feb 2019 • Sebastian Farquhar, Yarin Gal

From a Bayesian perspective, continual learning seems straightforward: Given the model posterior one would simply use this as the prior for the next task.
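In conjugate models the "posterior becomes the next prior" recipe is exact, which is what makes the Bayesian view of continual learning look straightforward. A toy illustration with a Gaussian mean (our own example, not the paper's experiments):

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_var, data, noise_var):
    # Conjugate Gaussian update: posterior over an unknown mean given data
    # with known observation noise. Precisions add; means combine weighted
    # by precision.
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

# Sequential (continual) learning: the posterior after task 1 becomes the
# prior for task 2. For conjugate models this exactly matches training on
# all the data at once.
m, v = gaussian_posterior(0.0, 10.0, np.array([1.0, 1.2, 0.8]), 1.0)
m, v = gaussian_posterior(m, v, np.array([0.9, 1.1]), 1.0)
m_all, v_all = gaussian_posterior(0.0, 10.0,
                                  np.array([1.0, 1.2, 0.8, 0.9, 1.1]), 1.0)
print(np.isclose(m, m_all) and np.isclose(v, v_all))  # True
```

With approximate posteriors (as in deep networks) this equivalence breaks, which is where the practical difficulty of continual learning re-enters.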

no code implementations • 18 Feb 2019 • Sebastian Farquhar, Yarin Gal

Catastrophic forgetting can be a significant problem for institutions that must delete historic data for privacy reasons.

1 code implementation • 30 Nov 2018 • Jishnu Mukhoti, Yarin Gal

Deep learning has been revolutionary for computer vision and semantic segmentation in particular, with Bayesian Deep Learning (BDL) used to obtain uncertainty maps from deep models when predicting semantic classes.

1 code implementation • 23 Nov 2018 • Jishnu Mukhoti, Pontus Stenetorp, Yarin Gal

Like all sub-fields of machine learning, Bayesian Deep Learning is driven by empirical validation of its theoretical proposals.

no code implementations • 16 Nov 2018 • Rhiannon Michelmore, Marta Kwiatkowska, Yarin Gal

A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them increasingly used within safety-critical domains.

no code implementations • 8 Nov 2018 • Frank Soboczenski, Michael D. Himes, Molly D. O'Beirne, Simone Zorzan, Atilim Gunes Baydin, Adam D. Cobb, Yarin Gal, Daniel Angerhausen, Massimo Mascaro, Giada N. Arney, Shawn D. Domagal-Goldman

Here we present an ML-based retrieval framework called Intelligent exoplaNet Atmospheric RetrievAl (INARA) that consists of a Bayesian deep learning model for retrieval and a data set of 3,000,000 synthetic rocky exoplanetary spectra generated using the NASA Planetary Spectrum Generator.

1 code implementation • NIPS Workshop CDNNRIA 2018 • Aidan N. Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton

Neural networks are extremely flexible models due to their large number of parameters, which is beneficial for learning, but also highly redundant.

3 code implementations • ICML 2018 • Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, Akash Srivastava

Uncertainty computation in deep learning is essential to design robust and reliable systems.

no code implementations • 2 Jun 2018 • Yarin Gal, Lewis Smith

Lastly, we demonstrate the defence on a cats-vs-dogs image classification task with a VGG13 variant.

no code implementations • 24 May 2018 • Sebastian Farquhar, Yarin Gal

Experiments used in current continual learning research do not faithfully assess fundamental challenges of learning continually.

1 code implementation • 10 May 2018 • Adam D. Cobb, Stephen J. Roberts, Yarin Gal

Current approaches in approximate inference for Bayesian neural networks minimise the Kullback-Leibler divergence to approximate the true posterior over the weights.

2 code implementations • 22 Mar 2018 • Lewis Smith, Yarin Gal

Measuring uncertainty is a promising technique for detecting adversarial examples, crafted inputs on which the model predicts an incorrect class with high confidence.

3 code implementations • NeurIPS 2018 • Iryna Korshunova, Jonas Degrave, Ferenc Huszár, Yarin Gal, Arthur Gretton, Joni Dambre

We present a novel model architecture which leverages deep learning tools to perform exact Bayesian inference on sets of high dimensional, complex observations.

no code implementations • 4 Dec 2017 • Mohammad Emtiyaz Khan, Zuozhu Liu, Voot Tangkaratt, Yarin Gal

Overall, this paper presents Vprop as a principled, computationally-efficient, and easy-to-implement method for Bayesian deep learning.

4 code implementations • NeurIPS 2017 • Yarin Gal, Jiri Hron, Alex Kendall

Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks.

3 code implementations • NeurIPS 2017 • Piotr Dabkowski, Yarin Gal

In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier.

13 code implementations • CVPR 2018 • Alex Kendall, Yarin Gal, Roberto Cipolla

Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
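The uncertainty-based loss weighting this entry describes can be sketched compactly. The following is a minimal NumPy illustration of the regression form of the objective, where each task gets a learnable log-variance s_i that scales its loss by exp(-s_i) and adds a s_i penalty; it is a sketch of the idea, not the authors' implementation:

```python
import numpy as np

def multitask_loss(task_losses, log_vars):
    """Homoscedastic-uncertainty weighting of per-task losses.

    Each task i has a learnable log-variance s_i: its loss is scaled by
    exp(-s_i), and the + s_i term stops the model from inflating the
    variance to zero out every task.  (Regression form; classification
    tasks use a similarly scaled likelihood.)
    """
    total = 0.0
    for L_i, s_i in zip(task_losses, log_vars):
        total += 0.5 * np.exp(-s_i) * L_i + 0.5 * s_i
    return total

# Two tasks with equal raw losses: the task declared noisier
# (larger log-variance) contributes less through the exp(-s) factor.
print(multitask_loss([1.0, 1.0], [0.0, 2.0]))
```

In training, the log-variances would be optimised jointly with the network weights rather than fixed as here.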

9 code implementations • NeurIPS 2017 • Alex Kendall, Yarin Gal

On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data.

Ranked #54 on Semantic Segmentation on NYU Depth v2
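The epistemic/aleatoric split mentioned above has a simple Monte-Carlo form when a network outputs both a mean and a log-variance per input and is sampled T times (e.g. with dropout left on). A minimal NumPy sketch, with illustrative array shapes:

```python
import numpy as np

def decompose_uncertainty(mus, log_vars):
    """Split predictive variance into epistemic and aleatoric parts.

    mus, log_vars: arrays of shape (T, N) -- T stochastic forward passes
    of a network predicting a mean and a log-variance for N inputs.
    """
    epistemic = mus.var(axis=0)                # spread of the sampled means
    aleatoric = np.exp(log_vars).mean(axis=0)  # average predicted data noise
    return epistemic, aleatoric

T, N = 50, 3
rng = np.random.default_rng(1)
mus = rng.normal(size=(T, N))                  # toy samples
log_vars = np.full((T, N), np.log(0.25))       # constant predicted noise
ep, al = decompose_uncertainty(mus, log_vars)
print(np.allclose(al, 0.25))  # True
```

The epistemic term shrinks with more training data, while the aleatoric term does not, matching the distinction drawn in the abstract.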

3 code implementations • ICML 2017 • Yarin Gal, Riashat Islam, Zoubin Ghahramani

In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way.
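One acquisition function used in this Bayesian active-learning setting is BALD, the mutual information between predictions and the model posterior, which can be estimated from stochastic forward passes. A minimal NumPy sketch (the shapes and toy inputs are illustrative):

```python
import numpy as np

def bald_score(probs):
    """BALD acquisition estimated from MC-dropout samples.

    probs: (T, N, C) class probabilities from T stochastic passes.
    Returns, per input, H[mean prediction] - mean[H[prediction]]:
    high when the passes are individually confident but disagree.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                   # (N, C)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)
    mean_of_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)
    return entropy_of_mean - mean_of_entropy

# Input 0: all passes agree -> score ~ 0.
# Input 1: passes confidently disagree -> high score.
p_agree = np.array([[0.9, 0.1]] * 4)
p_disagree = np.array([[0.99, 0.01], [0.01, 0.99]] * 2)
probs = np.stack([p_agree, p_disagree], axis=1)   # (4, 2, 2)
scores = bald_score(probs)
print(scores[1] > scores[0])  # True
```

Points with the highest scores would be sent for labelling next; the paper also evaluates other acquisition functions in the same framework.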

1 code implementation • ICML 2017 • Yingzhen Li, Yarin Gal

To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed.

15 code implementations • NeurIPS 2016 • Yarin Gal, Zoubin Ghahramani

Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout.

Ranked #34 on Language Modelling on Penn Treebank (Word Level)

no code implementations • 16 Sep 2015 • Hong Ge, Yarin Gal, Zoubin Ghahramani

In this paper, first we review the theory of random fragmentation processes [Bertoin, 2006], and a number of existing methods for modelling trees, including the popular nested Chinese restaurant process (nCRP).

22 code implementations • 6 Jun 2015 • Yarin Gal, Zoubin Ghahramani

In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost.
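The practical recipe that grew out of this line of work, Monte Carlo dropout, keeps dropout active at test time and averages several stochastic forward passes. A minimal NumPy sketch on a toy two-layer network (the weights and sizes here are placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network; random placeholder weights.
W1 = rng.normal(size=(1, 64))
W2 = rng.normal(size=(64, 1)) / 8.0
p_drop = 0.5

def stochastic_forward(x):
    """One forward pass with a freshly sampled dropout mask (dropout ON)."""
    h = np.tanh(x @ W1)
    mask = rng.random(h.shape) > p_drop       # keep units with prob 1 - p
    h = h * mask / (1.0 - p_drop)             # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes: the sample mean approximates the
    predictive mean; the sample std is an uncertainty estimate."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = mc_dropout_predict(np.array([[0.3]]))
print(mean.shape, std.shape)  # (1, 1) (1, 1)
```

The only change from standard inference is not disabling dropout, which is what makes the method cheap enough for large models.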

1 code implementation • 6 Jun 2015 • Yarin Gal, Zoubin Ghahramani

We show that a neural network with arbitrary depth and non-linearities, with dropout applied before every weight layer, is mathematically equivalent to an approximation to a well known Bayesian model.

3 code implementations • 6 Jun 2015 • Yarin Gal, Zoubin Ghahramani

Convolutional neural networks (CNNs) work well on large datasets.

1 code implementation • 9 Mar 2015 • Yarin Gal, Richard Turner

We model the covariance function with a finite Fourier series approximation and treat it as a random variable.
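A finite Fourier representation of a stationary covariance function can be illustrated with standard random Fourier features. Note the paper goes further and treats the representation as a random variable to be inferred; this fixed-frequency NumPy sketch is only the simplest Monte-Carlo variant, shown for an RBF kernel:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=20000, lengthscale=1.0):
    """Random Fourier features approximating an RBF kernel.

    Frequencies are drawn from the kernel's spectral density
    (a Gaussian with scale 1/lengthscale), so that
    phi(x) . phi(y) ~ exp(-||x - y||^2 / (2 * lengthscale^2)).
    """
    d = X.shape[1]
    Omega = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

X = rng.normal(size=(5, 2))
Phi = rff_features(X)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # small Monte-Carlo error
```

With enough features the inner product of the feature maps converges to the exact kernel, which is what makes finite Fourier representations a useful substitute for the full covariance function.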

1 code implementation • 7 Mar 2015 • Yarin Gal, Yutian Chen, Zoubin Ghahramani

Building on these ideas we propose a Bayesian model for the unsupervised task of distribution estimation of multivariate categorical data.

no code implementations • 28 Feb 2014 • Yarin Gal

Over the past 50 years many have debated what representation should be used to capture the meaning of natural language utterances.

no code implementations • 6 Feb 2014 • Yarin Gal, Mark van der Wilk

In this tutorial we explain the inference procedures developed for the sparse Gaussian process (GP) regression and Gaussian process latent variable model (GPLVM).

1 code implementation • NeurIPS 2014 • Yarin Gal, Mark van der Wilk, Carl E. Rasmussen

We show that GP performance improves with increasing amounts of data in regression (on flight data with 2 million records) and latent variable modelling (on MNIST).
