Search Results for author: Andreas Kirsch

Found 16 papers, 7 papers with code

Unifying Approaches in Data Subset Selection via Fisher Information and Information-Theoretic Quantities

no code implementations1 Aug 2022 Andreas Kirsch, Yarin Gal

The mutual information between predictions and model parameters, also referred to as expected information gain or BALD in machine learning, measures informativeness.

Active Learning · Informativeness
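
The BALD quantity mentioned above is typically estimated with Monte Carlo samples of the model parameters (e.g. via MC dropout). A minimal sketch, assuming an array of sampled class probabilities; the function name and the epsilon for numerical stability are illustrative:

```python
import numpy as np

def bald_scores(probs):
    """probs: (K, N, C) class probabilities from K sampled models
    (e.g. MC dropout forward passes) for N pool points, C classes."""
    mean_probs = probs.mean(axis=0)  # (N, C) marginal predictive
    entropy_of_mean = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    mean_of_entropies = -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean(axis=0)
    # Mutual information between prediction and parameters, per point.
    return entropy_of_mean - mean_of_entropies
```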

Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling

no code implementations18 May 2022 Andreas Kirsch, Jannik Kossen, Yarin Gal

The proposed evaluation settings are more realistic than previously suggested ones, building on work by Wen et al. (2021) and Osband et al. (2022), and focus on evaluating the performance of approximate BNNs in an online supervised setting.

Active Learning · Bayesian Inference · +2
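
To illustrate the distinction the title draws, here is a hedged sketch of marginal versus joint cross-entropies under a sampled approximate posterior; array shapes and function names are assumptions, not the paper's code:

```python
import numpy as np

def marginal_cross_entropy(probs, labels):
    """probs: (K, N, C) sampled predictive distributions; labels: (N,).
    Marginalize over parameter samples independently per point."""
    marginal = probs.mean(axis=0)  # (N, C)
    return -np.log(marginal[np.arange(len(labels)), labels] + 1e-12).sum()

def joint_cross_entropy(probs, labels):
    """Average the *joint* likelihood of the whole label sequence over
    parameter samples, which rewards consistent hypotheses."""
    log_lik = np.log(probs[:, np.arange(len(labels)), labels] + 1e-12)  # (K, N)
    joint_log_lik = log_lik.sum(axis=1)  # (K,) log-likelihood per sample
    return -(np.logaddexp.reduce(joint_log_lik) - np.log(probs.shape[0]))
```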

A Note on "Assessing Generalization of SGD via Disagreement"

no code implementations3 Feb 2022 Andreas Kirsch, Yarin Gal

Jiang et al. (2021) give empirical evidence that the average test error of deep neural networks can be estimated via the prediction disagreement of two separately trained networks.
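
The disagreement estimate itself is simple to compute; a minimal sketch (names are illustrative):

```python
import numpy as np

def disagreement_rate(logits_a, logits_b):
    """Fraction of inputs on which two independently trained networks
    disagree; Jiang et al. (2021) relate this to average test error."""
    return (logits_a.argmax(axis=-1) != logits_b.argmax(axis=-1)).mean()
```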

Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data

1 code implementation NeurIPS 2021 Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, Yarin Gal

We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects.

Active Learning
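
As a loose illustration of acquiring where the model is unsure about treatment effects, here is a variance-based proxy over posterior samples; this is a stand-in sketch, not the paper's information-theoretic acquisition functions:

```python
import numpy as np

def cate_uncertainty_scores(mu0_samples, mu1_samples):
    """mu0_samples, mu1_samples: (K, N) posterior samples of expected
    outcomes under control/treatment for N candidate points.
    Variance of the sampled treatment effects as an epistemic proxy."""
    cate_samples = mu1_samples - mu0_samples  # (K, N)
    return cate_samples.var(axis=0)  # high where the effect is uncertain
```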

Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers

no code implementations22 Jun 2021 Andreas Kirsch, Tom Rainforth, Yarin Gal

Expanding on MacKay (1992), we argue that conventional model-based methods for active learning, such as BALD, have a fundamental shortcoming: they fail to directly account for the test-time distribution of the input variables.

Active Learning
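
One way to make acquisition test-distribution-aware is to score candidates by the expected information their label carries about predictions on test inputs. A hedged sketch of such a mutual-information estimate from parameter samples (the estimator form and names are assumptions):

```python
import numpy as np

def entropy(p, axis=-1):
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def test_aware_score(probs_pool, probs_test):
    """probs_pool: (K, C) sampled predictions for one candidate point;
    probs_test: (K, M, C) sampled predictions for M test-distribution inputs.
    Mutual information between the candidate label and each test label,
    averaged over the test inputs."""
    K = probs_pool.shape[0]
    joint = np.einsum('kc,kmd->mcd', probs_pool, probs_test) / K  # (M, C, C)
    h_joint = entropy(joint.reshape(joint.shape[0], -1))  # (M,)
    h_pool = entropy(probs_pool.mean(axis=0))  # scalar
    h_test = entropy(probs_test.mean(axis=0))  # (M,)
    return (h_pool + h_test - h_joint).mean()
```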

Deep Deterministic Uncertainty: A Simple Baseline

3 code implementations23 Feb 2021 Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

Reliable uncertainty from deterministic single-forward-pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.

Active Learning
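
Methods in this family typically derive uncertainty from a density fit in feature space after a single forward pass. A sketch using scikit-learn's GaussianMixture as a stand-in for the paper's class-conditional Gaussian fit; it assumes features from a suitably regularized (e.g. spectrally normalized) encoder:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_feature_density(train_features, n_components):
    """Fit a mixture over penultimate-layer features of the training set."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(train_features)
    return gmm

def epistemic_uncertainty(gmm, features):
    # Low log-density under the fitted mixture => high epistemic uncertainty.
    return -gmm.score_samples(features)
```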

PowerEvaluationBALD: Efficient Evaluation-Oriented Deep (Bayesian) Active Learning with Stochastic Acquisition Functions

no code implementations10 Jan 2021 Andreas Kirsch, Yarin Gal

We develop BatchEvaluationBALD, a new acquisition function for deep Bayesian active learning, as an extension of BatchBALD that takes into account an evaluation set of unlabeled data, for example, the pool set.

Active Learning
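
A stochastic acquisition function in this spirit samples a batch with probability proportional to a power of the per-point scores instead of taking the greedy top-k; a minimal sketch (parameter names are illustrative):

```python
import numpy as np

def power_acquisition(scores, batch_size, beta=1.0, seed=None):
    """Sample a batch without replacement with probability ~ score**beta.
    beta interpolates between uniform sampling (0) and near-greedy
    selection (large beta); assumes non-negative scores."""
    rng = np.random.default_rng(seed)
    weights = np.clip(scores, 0.0, None) ** beta
    probs = weights / weights.sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=probs)
```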

Unpacking Information Bottlenecks: Surrogate Objectives for Deep Learning

no code implementations1 Jan 2021 Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize and a regularized objective with which to train models.

Density Estimation
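
For reference, the Information Bottleneck objective trades off compressing the input X into a representation Z against preserving information about the target Y (standard formulation; the surrogate objectives in the paper replace the intractable terms):

```latex
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```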

Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning

no code implementations27 Mar 2020 Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize and a regularized objective with which to train models.

Density Estimation

BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning

2 code implementations NeurIPS 2019 Andreas Kirsch, Joost van Amersfoort, Yarin Gal

We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning.

Active Learning
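
A toy version of the greedy selection makes the idea concrete: at each step, add the point that most increases the estimated mutual information between the batch's joint labels and the parameters. This enumerates joint label configurations exactly, so it is exponential in batch size; the paper's contribution is a tractable approximation, which this sketch does not reproduce:

```python
import numpy as np

def batchbald_greedy_toy(probs, batch_size):
    """probs: (K, N, C) sampled class probabilities. Exact-enumeration
    toy; only feasible for tiny batches and class counts."""
    K, N, C = probs.shape
    batch = []
    joint = np.ones((K, 1))  # running joint likelihood p(y_1..y_b | params_k)
    cond_entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1).mean(axis=0)
    for _ in range(batch_size):
        best_n, best_score = None, -np.inf
        for n in range(N):
            if n in batch:
                continue
            cand = (joint[:, :, None] * probs[:, n, None, :]).reshape(K, -1)
            p_joint = cand.mean(axis=0)  # joint predictive over label tuples
            h_joint = -np.sum(p_joint * np.log(p_joint + 1e-12))
            score = h_joint - cond_entropy[batch + [n]].sum()
            if score > best_score:
                best_n, best_score = n, score
        batch.append(best_n)
        joint = (joint[:, :, None] * probs[:, best_n, None, :]).reshape(K, -1)
    return batch
```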

MDP environments for the OpenAI Gym

1 code implementation26 Sep 2017 Andreas Kirsch

The OpenAI Gym provides researchers and enthusiasts with simple-to-use environments for reinforcement learning.

OpenAI Gym
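
For context, the standard Gym interaction loop looks like this (classic API; newer Gym/Gymnasium versions changed the reset and step signatures), using a built-in environment rather than the MDP package from the paper:

```python
import gym

env = gym.make('FrozenLake-v1')
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy for illustration
    obs, reward, done, info = env.step(action)
env.close()
```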
