Search Results for author: Andreas Kirsch

Found 22 papers, 12 papers with code

Does "Deep Learning on a Data Diet" reproduce? Overall yes, but GraNd at Initialization does not

1 code implementation · 26 Mar 2023 · Andreas Kirsch

Unfortunately, neither the GraNd score at initialization nor the input norm surpasses random pruning in performance.
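
For context, the GraNd score of a training point is the (expected) norm of the loss gradient with respect to the model parameters. A minimal PyTorch sketch, assuming a classification model; the original work averages this over several initializations, and one backward pass per example keeps the sketch simple:

```python
import torch
import torch.nn.functional as F

def grand_scores(model, inputs, targets):
    """GraNd-style score (sketch): per-example norm of the gradient of
    the loss w.r.t. the model parameters, one backward pass per example."""
    params = [p for p in model.parameters() if p.requires_grad]
    scores = []
    for x, y in zip(inputs, targets):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        scores.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))
    return torch.stack(scores)
```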

Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning

2 code implementations · 22 Jun 2021 · Andreas Kirsch, Sebastian Farquhar, Parmida Atighehchian, Andrew Jesson, Frederic Branchaud-Charron, Yarin Gal

We examine a simple stochastic strategy for adapting well-known single-point acquisition functions to allow batch active learning.

Active Learning
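
The strategy replaces deterministic top-k selection with sampling: score every pool point with a single-point acquisition function, then draw the batch from a softmax distribution over the scores. A minimal NumPy sketch (the temperature beta is an assumed knob):

```python
import numpy as np

def stochastic_batch(scores, batch_size, beta=1.0, rng=None):
    """Sample a batch from a softmax distribution over single-point
    acquisition scores instead of taking the deterministic top-k (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    logits = beta * (np.asarray(scores) - np.max(scores))  # stabilized
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=probs)
```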

BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning

3 code implementations · NeurIPS 2019 · Andreas Kirsch, Joost van Amersfoort, Yarin Gal

We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning.

Active Learning
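
Concretely, BatchBALD scores a candidate batch by the joint mutual information I(y_1, ..., y_b; θ) and builds the batch greedily. A minimal sketch that enumerates the joint predictive exactly from Monte Carlo parameter samples; the paper uses more scalable sampled configurations, and the shapes here are assumptions:

```python
import torch

def batchbald_greedy(log_probs: torch.Tensor, batch_size: int):
    """Greedy BatchBALD (sketch). log_probs: [K, N, C] log-predictives
    from K Monte Carlo parameter samples over N pool points and C classes.
    The joint state space grows as C**batch_size, so this only suits
    small batches; the paper samples joint configurations instead."""
    K, N, C = log_probs.shape
    probs = log_probs.exp()
    # Conditional entropy term E_theta[H[y | x, theta]] per pool point.
    cond_ent = -(probs * log_probs).sum(dim=2).mean(dim=0)  # [N]
    joint = torch.ones(K, 1)  # joint predictive over the chosen batch
    chosen, cond_sum = [], 0.0
    for _ in range(batch_size):
        # Joint predictive after adding each candidate point: [K, N, M, C].
        cand = joint.unsqueeze(1).unsqueeze(3) * probs.unsqueeze(2)
        p_joint = cand.reshape(K, N, -1).mean(dim=0)  # [N, M*C]
        joint_ent = -(p_joint * p_joint.clamp_min(1e-12).log()).sum(dim=1)
        scores = joint_ent - (cond_sum + cond_ent)  # batch mutual information
        if chosen:
            scores[chosen] = -float("inf")  # never pick a point twice
        n = int(scores.argmax())
        chosen.append(n)
        cond_sum = cond_sum + cond_ent[n]
        joint = cand[:, n].reshape(K, -1)
    return chosen
```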

Deep Deterministic Uncertainty: A Simple Baseline

4 code implementations · 23 Feb 2021 · Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.

Active Learning · Uncertainty Quantification
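
A rough sketch of the recipe: fit a density model on the penultimate-layer features of a suitably regularized network and use feature log-density as the epistemic signal, keeping softmax entropy for the aleatoric part. The paper fits class-conditional Gaussians (GDA); a generic scikit-learn GMM stands in here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_feature_density(train_features, n_components=10):
    """Fit a density model on penultimate-layer features (sketch; the
    paper uses one Gaussian per class, i.e. GDA)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(train_features)

def epistemic_score(density, features):
    """Low feature log-density = far from the training data = high
    epistemic uncertainty; softmax entropy covers the aleatoric part."""
    return -density.score_samples(features)
```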

MDP environments for the OpenAI Gym

1 code implementation · 26 Sep 2017 · Andreas Kirsch

The OpenAI Gym provides researchers and enthusiasts with simple to use environments for reinforcement learning.

OpenAI Gym · reinforcement-learning +1
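
For readers unfamiliar with the interface these environments plug into, here is a minimal interaction loop against the classic (2017-era) Gym API; the environment id is a stand-in, since the package registers its own MDP ids:

```python
import gym

env = gym.make("FrozenLake-v0")  # stand-in id; the package registers its own MDPs
obs = env.reset()
done = False
while not done:
    # Random policy: sample an action and step the MDP.
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```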

Prediction-Oriented Bayesian Active Learning

1 code implementation · 17 Apr 2023 · Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, Tom Rainforth

Information-theoretic approaches to active learning have traditionally focused on maximising the information gathered about the model parameters, most commonly by optimising the BALD score.

Active Learning
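
For reference, BALD scores a point x by the mutual information between its label and the model parameters, I(y; θ | x). A minimal Monte Carlo estimate (array shapes are assumptions):

```python
import torch

def bald_scores(log_probs: torch.Tensor) -> torch.Tensor:
    """BALD (sketch): entropy of the mean predictive minus the mean
    entropy of per-sample predictives. log_probs: [K, N, C] from K
    Monte Carlo parameter samples over N pool points and C classes."""
    probs = log_probs.exp()
    mean_probs = probs.mean(dim=0)  # marginal predictive, [N, C]
    entropy_of_mean = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    mean_of_entropy = -(probs * log_probs).sum(dim=2).mean(dim=0)
    return entropy_of_mean - mean_of_entropy
```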

Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data

2 code implementations · NeurIPS 2021 · Andrew Jesson, Panagiotis Tigas, Joost van Amersfoort, Andreas Kirsch, Uri Shalit, Yarin Gal

We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects.

Active Learning

Black-Box Batch Active Learning for Regression

1 code implementation · 17 Feb 2023 · Andreas Kirsch

This approach is compatible with a wide range of machine learning models, including regular and Bayesian deep learning models and non-differentiable models such as random forests.

Active Learning · regression
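
One way to get batch selection without gradients is to build a kernel from quantities any model can produce, for example ensemble predictions used as feature vectors, and select greedily on that kernel. The sketch below is an illustration of this kernel-based idea via greedy determinant maximization, not the paper's exact selection rule:

```python
import numpy as np

def greedy_kernel_batch(features, batch_size, jitter=1e-6):
    """Greedy max-determinant selection on K = F F^T (sketch). `features`
    can be ensemble predictions per point, so no model gradients are needed."""
    F = np.asarray(features, dtype=float)
    K = F @ F.T + jitter * np.eye(len(F))
    var = np.diag(K).copy()               # conditional variances
    cols = np.zeros((batch_size, len(F)))
    chosen = []
    for t in range(batch_size):
        i = int(np.argmax(var))
        chosen.append(i)
        # Pivoted-Cholesky / Schur-complement update of the variances.
        c = (K[i] - cols[:t].T @ cols[:t, i]) / np.sqrt(max(var[i], jitter))
        cols[t] = c
        var = var - c ** 2
        var[chosen] = -np.inf             # exclude already-chosen points
    return chosen
```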

A Note on "Assessing Generalization of SGD via Disagreement"

1 code implementation · 3 Feb 2022 · Andreas Kirsch, Yarin Gal

Several recent works find empirically that the average test error of deep neural networks can be estimated via the prediction disagreement of models, which does not require labels.
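
The estimator under examination is simple to state: train two models independently, predict on unlabeled data, and measure how often their hard predictions differ. A minimal sketch:

```python
import numpy as np

def disagreement_rate(preds_a, preds_b):
    """Fraction of unlabeled points where two independently trained
    models disagree; used as a label-free test-error estimate."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))
```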

Unifying Approaches in Active Learning and Active Sampling via Fisher Information and Information-Theoretic Quantities

1 code implementation · 1 Aug 2022 · Andreas Kirsch, Yarin Gal

Recently proposed methods in data subset selection (that is, active learning and active sampling) use Fisher information, Hessians, similarity matrices based on gradients, and gradient lengths to estimate how informative data is for a model's training.

Active Learning · Informativeness

Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning

no code implementations · 27 Mar 2020 · Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models.

Density Estimation
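
For reference, the standard Information Bottleneck objective for a stochastic representation Z of an input X used to predict Y (the paper's contribution lies in unifying surrogate decompositions of these terms):

```latex
\max_{p(z \mid x)} \; I(Z; Y) \;-\; \beta \, I(Z; X)
```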

Unpacking Information Bottlenecks: Surrogate Objectives for Deep Learning

no code implementations · 1 Jan 2021 · Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize, as well as a regularized objective with which to train models.

Density Estimation

PowerEvaluationBALD: Efficient Evaluation-Oriented Deep (Bayesian) Active Learning with Stochastic Acquisition Functions

no code implementations · 10 Jan 2021 · Andreas Kirsch, Yarin Gal

We develop BatchEvaluationBALD, a new acquisition function for deep Bayesian active learning, as an expansion of BatchBALD that takes into account an evaluation set of unlabeled data, for example, the pool set.

Active Learning

Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers

no code implementations · 22 Jun 2021 · Andreas Kirsch, Tom Rainforth, Yarin Gal

Expanding on MacKay (1992), we argue that conventional model-based methods for active learning - like BALD - have a fundamental shortcoming: they fail to directly account for the test-time distribution of the input variables.

Active Learning
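
Schematically, the proposed remedy scores a candidate x by the expected information its label carries about predictions on inputs drawn from the test distribution, rather than about the parameters (notation assumed, in the spirit of the paper's EPIG-style objectives):

```latex
\mathrm{EPIG}(x) \;=\; \mathbb{E}_{p_{*}(x_{*})}\!\left[\, I(y; y_{*} \mid x, x_{*}) \,\right]
```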

Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling

no code implementations · 18 May 2022 · Andreas Kirsch, Jannik Kossen, Yarin Gal

The proposed evaluation settings are more realistic than previously suggested ones, building on work by Wen et al. (2021) and Osband et al. (2022), and focus on evaluating the performance of approximate BNNs in an online supervised setting.

Active Learning · Bayesian Inference +1
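
The distinction at the heart of the paper, schematically: the marginal cross-entropy scores each prediction independently, while the joint cross-entropy scores the whole evaluation sequence under the joint predictive:

```latex
\mathcal{L}_{\text{marginal}} = -\sum_{i=1}^{n} \log \hat{p}(y_i \mid x_i),
\qquad
\mathcal{L}_{\text{joint}} = -\log \hat{p}(y_1, \ldots, y_n \mid x_1, \ldots, x_n)
```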

Speeding Up BatchBALD: A k-BALD Family of Approximations for Active Learning

no code implementations · 23 Jan 2023 · Andreas Kirsch

One commonly used technique for active learning is BatchBALD, which uses Bayesian neural networks to find the most informative points to label in a pool set.

Active Learning
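
k-BALD approximates BatchBALD's joint mutual information with k-wise terms; for k = 2 this means individual BALD scores corrected by pairwise mutual informations. A minimal sketch of the pairwise term from Monte Carlo samples (shapes assumed as in the other sketches on this page):

```python
import torch

def pairwise_mi(log_probs: torch.Tensor, i: int, j: int) -> torch.Tensor:
    """I(y_i; y_j) under the joint predictive, estimated from K Monte
    Carlo parameter samples. log_probs: [K, N, C]."""
    probs = log_probs.exp()
    K = probs.shape[0]
    # Joint predictive p(y_i, y_j) = E_theta[p(y_i|theta) p(y_j|theta)].
    p_joint = torch.einsum("ka,kb->ab", probs[:, i], probs[:, j]) / K
    p_i, p_j = p_joint.sum(dim=1), p_joint.sum(dim=0)
    log_ratio = (p_joint.clamp_min(1e-12).log()
                 - p_i.clamp_min(1e-12).log()[:, None]
                 - p_j.clamp_min(1e-12).log()[None, :])
    return (p_joint * log_ratio).sum()
```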

Deep Deterministic Uncertainty: A New Simple Baseline

no code implementations · CVPR 2023 · Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal

Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.

Active Learning · Semantic Segmentation +1

Advancing Deep Active Learning & Data Subset Selection: Unifying Principles with Information-Theory Intuitions

no code implementations · 9 Jan 2024 · Andreas Kirsch

At its core, this thesis aims to enhance the practicality of deep learning by improving the label and training efficiency of deep learning models.

Active Learning
