Search Results for author: Eric Nalisnick

Found 37 papers, 18 papers with code

Uncertainty Aware Tropical Cyclone Wind Speed Estimation from Satellite Data

2 code implementations • 12 Apr 2024 • Nils Lehmann, Nina Maria Gottschling, Stefan Depeweg, Eric Nalisnick

We provide a detailed evaluation of predictive uncertainty estimates from state-of-the-art uncertainty quantification (UQ) methods for DNNs.

Decision Making • Earth Observation +1

Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction

no code implementations • 12 Mar 2024 • Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick

In particular, we leverage conformal prediction to obtain uncertainty intervals with guaranteed coverage for object bounding boxes.

Autonomous Driving • Conformal Prediction +3
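The guaranteed-coverage intervals mentioned above come from conformal prediction. As a hedged illustration only (this is standard split conformal prediction for scalar regression, not the paper's two-step procedure for bounding boxes), a minimal sketch might look like this; the function name and data are hypothetical:

```python
import numpy as np

def conformal_interval(cal_preds, cal_labels, test_preds, alpha=0.1):
    """Split conformal prediction: calibrate a residual quantile on held-out
    data, then form intervals with (1 - alpha) marginal coverage."""
    scores = np.abs(cal_labels - cal_preds)          # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return test_preds - q_hat, test_preds + q_hat    # lower, upper bounds
```

The coverage guarantee is marginal and distribution-free, requiring only that calibration and test points are exchangeable.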

Learning to Defer to a Population: A Meta-Learning Approach

1 code implementation • 5 Mar 2024 • Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick

The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert.

Meta-Learning • Traffic Sign Detection

A Generative Model of Symmetry Transformations

no code implementations • 4 Mar 2024 • James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.

Beyond Top-Class Agreement: Using Divergences to Forecast Performance under Distribution Shift

no code implementations • 13 Dec 2023 • Mona Schirmer, Dan Zhang, Eric Nalisnick

Knowing if a model will generalize to data 'in the wild' is crucial for safe deployment.

A powerful rank-based correction to multiple testing under positive dependency

no code implementations • 17 Nov 2023 • Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick

We develop a novel multiple hypothesis testing correction with family-wise error rate (FWER) control that efficiently exploits positive dependencies between potentially correlated statistical hypothesis tests.

Conformal Prediction
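For context on what a family-wise error rate (FWER) correction does, here is a sketch of the classic Holm step-down procedure, which controls FWER under arbitrary dependence. This is a standard baseline, not the paper's rank-based method that exploits positive dependencies:

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down correction: controls the family-wise error rate under
    arbitrary dependence, and is uniformly less conservative than plain
    Bonferroni (which divides alpha by m for every test)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject
```

A method that additionally models positive dependence between the tests can reject more hypotheses at the same FWER level.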

Active Learning for Multilingual Fingerspelling Corpora

no code implementations • 21 Sep 2023 • Shuai Wang, Eric Nalisnick

We apply active learning to mitigate data scarcity in sign language corpora.

Active Learning

Exploiting Inferential Structure in Neural Processes

1 code implementation • 27 Jun 2023 • Dharmesh Tailor, Mohammad Emtiyaz Khan, Eric Nalisnick

Neural Processes (NPs) are appealing due to their ability to perform fast adaptation based on a context set.

Do Bayesian Neural Networks Need To Be Fully Stochastic?

2 code implementations • 11 Nov 2022 • Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, Tom Rainforth

We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary.

Sampling-based inference for large linear models, with application to linearised Laplace

1 code implementation • 10 Oct 2022 • Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato

Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.

Bayesian Inference • Uncertainty Quantification

Hate Speech Criteria: A Modular Approach to Task-Specific Hate Speech Definitions

no code implementations • NAACL (WOAH) 2022 • Urja Khurana, Ivar Vermeulen, Eric Nalisnick, Marloes van Noorloos, Antske Fokkens

We argue that the goal and exact task that developers have in mind should determine how the scope of 'hate speech' is defined.

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

no code implementations • 17 Jun 2022 • Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato

The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.

Model Selection

Adversarial Defense via Image Denoising with Chaotic Encryption

no code implementations • 19 Mar 2022 • Shi Hu, Eric Nalisnick, Max Welling

In the literature on adversarial examples, white box and black box attacks have received the most attention.

Adversarial Defense • Image Denoising

Calibrated Learning to Defer with One-vs-All Classifiers

1 code implementation • 8 Feb 2022 • Rajeev Verma, Eric Nalisnick

We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness.

Hate Speech Detection
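The learning-to-defer setting pairs a classifier with a rejector that may hand inputs to a human expert. As a hedged sketch of the basic idea (a simple confidence-comparison rule, not the paper's one-vs-all parameterization; function and variable names are hypothetical):

```python
import numpy as np

def defer_decisions(model_probs, expert_correct_prob):
    """Toy learning-to-defer rule: defer an input to the human expert whenever
    the estimated probability that the expert is correct exceeds the model's
    confidence in its own top prediction.

    model_probs: (n, k) array of predicted class probabilities.
    expert_correct_prob: (n,) array of estimated expert-correctness probs.
    Returns a boolean array where True means "defer to the expert".
    """
    model_conf = model_probs.max(axis=1)
    return expert_correct_prob > model_conf
```

Such a rule is only sensible if both confidence estimates are calibrated, which is exactly the property the paper shows can fail.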

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations • AABI Symposium 2022 • Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NN) with normalisation layers, i.e., batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Image Classification • Model Selection +1

Bayesian Deep Learning via Subnetwork Inference

1 code implementation • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.

Bayesian Inference
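The two-step recipe above (MAP estimate, then a Gaussian posterior via the linearized Laplace approximation) is easiest to see in the linear-Gaussian case, where the Laplace approximation is exact. A minimal sketch for Bayesian linear regression, with hypothetical names and toy data:

```python
import numpy as np

def laplace_linear_posterior(X, y, noise_var=1.0, prior_prec=1.0):
    """For Bayesian linear regression the Laplace approximation is exact:
    the MAP estimate is the ridge solution, and the posterior is Gaussian
    with covariance equal to the inverse Hessian of the negative log joint."""
    d = X.shape[1]
    H = X.T @ X / noise_var + prior_prec * np.eye(d)  # Hessian at the mode
    cov = np.linalg.inv(H)                            # posterior covariance
    mean = cov @ X.T @ y / noise_var                  # MAP / posterior mean
    return mean, cov
```

Subnetwork inference applies this idea to only a subset of a network's weights, which keeps the full-covariance Gaussian tractable.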

Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference

no code implementations • AABI Symposium 2021 • Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antoran, José Miguel Hernández-Lobato

In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.

Bayesian Inference

Predictive Complexity Priors

no code implementations • 18 Jun 2020 • Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato

For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model's predictions to those of a reference model.

Few-Shot Learning

Normalizing Flows for Probabilistic Modeling and Inference

6 code implementations • 5 Dec 2019 • George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan

In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
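The core mechanism of a normalizing flow is the change-of-variables formula: log p(x) = log p_z(f⁻¹(x)) + log |det ∂f⁻¹/∂x|. A minimal sketch for a one-dimensional affine flow with a standard normal base distribution (names are illustrative, not from the review):

```python
import numpy as np

def affine_flow_logpdf(x, shift, scale):
    """Log-density of x = shift + scale * z with z ~ N(0, 1), computed via
    the change-of-variables formula:
        log p(x) = log p_z(f_inv(x)) + log |d f_inv / dx|."""
    z = (x - shift) / scale                       # inverse transformation
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal log-pdf
    log_det = -np.log(abs(scale))                 # log |d f_inv / dx|
    return log_pz + log_det
```

Real flows compose many such invertible maps, with neural networks parameterizing the shifts and scales, but each layer contributes to the log-density in exactly this way.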

Bayesian Batch Active Learning as Sparse Subset Approximation

2 code implementations • NeurIPS 2019 • Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato

Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.

Active Learning

Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

2 code implementations • 7 Jun 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan

To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods.
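As a hedged sketch of this kind of test (a simplified one-sample version with hypothetical names, not the paper's exact procedure): estimate the model's entropy from training-set negative log-likelihoods, then flag a batch whose mean NLL deviates from that estimate by more than a bootstrap-calibrated threshold.

```python
import numpy as np

def typicality_test(train_nll, batch_nll, alpha=0.01):
    """Simplified typicality check: a batch is flagged as out-of-distribution
    when its mean negative log-likelihood deviates from the training-set
    entropy estimate by more than a threshold calibrated from bootstrapped
    in-distribution batches of the same size. Returns True if flagged."""
    rng = np.random.default_rng(0)
    entropy_hat = train_nll.mean()                  # entropy estimate
    m = len(batch_nll)
    # Bootstrap the null distribution of the deviation statistic.
    devs = [abs(rng.choice(train_nll, size=m).mean() - entropy_hat)
            for _ in range(1000)]
    threshold = np.quantile(devs, 1 - alpha)
    return abs(batch_nll.mean() - entropy_hat) > threshold
```

The key point of the paper is that high likelihood alone is not evidence of being in-distribution; typicality asks whether the likelihood is in the *expected range*, not merely large.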

Hybrid Models with Deep and Invertible Features

1 code implementation • 7 Feb 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e., a normalizing flow).

Probabilistic Deep Learning

Do Deep Generative Models Know What They Don't Know?

4 code implementations • ICLR 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data.

Dropout as a Structured Shrinkage Prior

1 code implementation • 9 Oct 2018 • Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth

We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e., dropout).

Bayesian Inference

Mondrian Processes for Flow Cytometry Analysis

no code implementations • 21 Nov 2017 • Disi Ji, Eric Nalisnick, Padhraic Smyth

Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions.

Uncertainty Quantification

Learning Approximately Objective Priors

no code implementations • 4 Apr 2017 • Eric Nalisnick, Padhraic Smyth

Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors.

Stick-Breaking Variational Autoencoders

2 code implementations • 20 May 2016 • Eric Nalisnick, Padhraic Smyth

We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes.
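The stick-breaking process underlying this model builds a sequence of mixture weights by repeatedly breaking off Beta-distributed fractions of a unit-length stick: π_k = v_k · ∏_{j<k} (1 − v_j) with v_k ~ Beta(1, α). A minimal sketch of the (truncated) construction, with hypothetical names:

```python
import numpy as np

def stick_breaking_weights(alpha, num_sticks, rng):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    pi_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
    The weights are nonnegative and sum to at most 1; the leftover mass
    shrinks geometrically with the number of sticks."""
    v = rng.beta(1.0, alpha, size=num_sticks)
    # Length of stick remaining before each break: [1, (1-v_1), (1-v_1)(1-v_2), ...]
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining
```

The paper's contribution is making the Beta draws reparameterizable so the weights can sit inside a variational autoencoder's latent space.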

A Dual Embedding Space Model for Document Ranking

no code implementations • 2 Feb 2016 • Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana

A fundamental goal of search engines is to identify, given a query, documents that have relevant text.

Document Ranking • Word Embeddings

Learning the Dimensionality of Word Embeddings

no code implementations • 17 Nov 2015 • Eric Nalisnick, Sachin Ravi

We describe a method for learning word embeddings with data-dependent dimensionality.

Learning Word Embeddings

A Scale Mixture Perspective of Multiplicative Noise in Neural Networks

no code implementations • 10 Jun 2015 • Eric Nalisnick, Anima Anandkumar, Padhraic Smyth

Corrupting the input and hidden layers of deep neural networks (DNNs) with multiplicative noise, often drawn from the Bernoulli distribution (or 'dropout'), provides regularization that has significantly contributed to deep learning's success.

Model Compression
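The multiplicative Bernoulli noise discussed above is ordinary dropout. As a hedged reference sketch (standard inverted dropout, not the paper's scale-mixture formulation; names are illustrative):

```python
import numpy as np

def multiplicative_noise(h, drop_prob, rng):
    """Inverted dropout: multiply activations by a Bernoulli mask scaled by
    1 / keep_prob, so the layer's expected output is unchanged and no
    rescaling is needed at test time."""
    keep_prob = 1.0 - drop_prob
    mask = rng.random(h.shape) < keep_prob   # Bernoulli(keep_prob) mask
    return h * mask / keep_prob
```

The scale-mixture view interprets such multiplicative noise as inducing a heavy-tailed prior over the weights, which is what connects dropout to Bayesian regularization.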
