no code implementations • 18 Dec 2024 • Rajeev Verma, Volker Fischer, Eric Nalisnick
Modern challenges of robustness, fairness, and decision-making in machine learning have led to the formulation of multi-distribution learning (MDL) frameworks in which a predictor is optimized across multiple distributions.
1 code implementation • 30 Oct 2024 • Ola Rønning, Eric Nalisnick, Christophe Ley, Padhraic Smyth, Thomas Hamelryck
Stein variational gradient descent (SVGD) [Liu and Wang, 2016] performs approximate Bayesian inference by representing the posterior with a set of particles.
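For context, the SVGD particle update itself is compact. Below is a minimal NumPy sketch for a 1-D standard-normal target, using the usual median-heuristic bandwidth; all names and settings are illustrative, not taken from the paper's code.

```python
import numpy as np

def svgd_step(x, score, lr=0.3):
    """One SVGD update: each particle follows the kernelized Stein direction,
    balancing attraction (kernel-weighted scores) against repulsion
    (kernel gradients) that keeps the particles spread out."""
    n = len(x)
    diff = x[:, None] - x[None, :]      # pairwise differences x_i - x_j
    d2 = diff ** 2
    h2 = np.median(d2) / np.log(n)      # median-heuristic bandwidth
    k = np.exp(-d2 / h2)                # RBF kernel matrix
    phi = (k @ score(x) + (2.0 * diff / h2 * k).sum(axis=1)) / n
    return x + lr * phi

# Target p(x) = N(0, 1), so the score is d log p / dx = -x.
rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=50)         # initial particles
for _ in range(500):
    x = svgd_step(x, lambda p: -p)
# The particle cloud should now approximate a standard normal.
```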
1 code implementation • 21 Oct 2024 • Urja Khurana, Eric Nalisnick, Antske Fokkens
To address this issue for hate speech detection, we propose DefVerify: a 3-step procedure that (i) encodes a user-specified definition of hate speech, (ii) quantifies to what extent the model reflects the intended definition, and (iii) tries to identify the point of failure in the workflow.
1 code implementation • 4 Oct 2024 • Nils Lehmann, Jakob Gawlikowski, Adam J. Stewart, Vytautas Jancauskas, Stefan Depeweg, Eric Nalisnick, Nina Maria Gottschling
Uncertainty quantification (UQ) is an essential tool for applying deep neural networks (DNNs) to real world tasks, as it attaches a degree of confidence to DNN outputs.
no code implementations • 26 Aug 2024 • Urja Khurana, Eric Nalisnick, Antske Fokkens, Swabha Swayamdipta
Subjective tasks in NLP have been mostly relegated to objective standards, where the gold label is decided by taking the majority vote.
no code implementations • 17 Jul 2024 • Mona Schirmer, Dan Zhang, Eric Nalisnick
Distribution shifts between training and test data are inevitable over the lifecycle of a deployed model, leading to performance decay.
1 code implementation • 31 May 2024 • Metod Jazbec, Alexander Timans, Tin Hadži Veljković, Kaspar Sakmann, Dan Zhang, Christian A. Naesseth, Eric Nalisnick
Scaling machine learning models significantly improves their performance.
1 code implementation • 12 Apr 2024 • Nils Lehmann, Nina Maria Gottschling, Stefan Depeweg, Eric Nalisnick
We provide a detailed evaluation of predictive uncertainty estimates from state-of-the-art uncertainty quantification (UQ) methods for DNNs.
1 code implementation • 12 Mar 2024 • Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick
Thus, we develop a novel two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals of bounding boxes.
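The method builds on split conformal prediction. The basic split-conformal recipe (just the standard building block, not the paper's label-propagation step) can be sketched as follows, with all names illustrative:

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_preds, alpha=0.1):
    """Split conformal prediction: absolute residuals on a held-out
    calibration set give a quantile q such that [pred - q, pred + q]
    has marginal coverage of at least 1 - alpha."""
    scores = np.abs(cal_targets - cal_preds)        # nonconformity scores
    n = len(scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n      # finite-sample correction
    q = np.quantile(scores, min(level, 1.0), method="higher")
    return test_preds - q, test_preds + q
```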
1 code implementation • 5 Mar 2024 • Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick
The learning to defer (L2D) framework allows autonomous systems to be safe and robust by allocating difficult decisions to a human expert.
1 code implementation • 4 Mar 2024 • James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato
Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.
no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van Den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
The field of deep generative modeling has grown rapidly and consistently over the years.
no code implementations • 13 Dec 2023 • Mona Schirmer, Dan Zhang, Eric Nalisnick
Knowing if a model will generalize to data 'in the wild' is crucial for safe deployment.
1 code implementation • 24 Nov 2023 • Thomas Jurriaans, Kinga Szarkowska, Eric Nalisnick, Markus Schwoerer, Camilo Thorne, Saber Akhondi
The focus of this research was to propose and test a novel method for classifying Markush structures.
no code implementations • 17 Nov 2023 • Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Christian A. Naesseth, Eric Nalisnick
Multiple hypothesis testing (MHT) commonly arises in various scientific fields, from genomics to psychology, where testing many hypotheses simultaneously increases the risk of Type-I errors.
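As a refresher on the classical corrections such work departs from, here is a minimal Benjamini-Hochberg step-up procedure, which controls the false discovery rate (plain Bonferroni would instead compare every p-value to alpha/m); a sketch, not the paper's method:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest
    (1-indexed) rank with p_(k) <= alpha * k / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()     # largest passing rank
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.07, 0.2, 0.5, 0.9]
# BH rejects the two smallest p-values here, while Bonferroni's threshold
# (0.05 / 10 = 0.005) would reject only the first.
```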
no code implementations • 10 Nov 2023 • Metod Jazbec, Patrick Forré, Stephan Mandt, Dan Zhang, Eric Nalisnick
These sequences are inherently nested and thus well-suited for an EENN's sequential predictions.
no code implementations • 21 Sep 2023 • Shuai Wang, Eric Nalisnick
We apply active learning to help with data scarcity problems in sign languages.
1 code implementation • 27 Jun 2023 • Dharmesh Tailor, Mohammad Emtiyaz Khan, Eric Nalisnick
Neural Processes (NPs) are appealing due to their ability to perform fast adaptation based on a context set.
2 code implementations • 11 Nov 2022 • Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, Tom Rainforth
We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary.
1 code implementation • 30 Oct 2022 • Rajeev Verma, Daniel Barrejón, Eric Nalisnick
We study the statistical properties of learning to defer (L2D) to multiple experts.
1 code implementation • 10 Oct 2022 • Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato
Large-scale linear models are ubiquitous throughout machine learning, with a contemporary application as surrogate models for neural network uncertainty quantification via the linearised Laplace method.
no code implementations • NAACL (WOAH) 2022 • Urja Khurana, Ivar Vermeulen, Eric Nalisnick, Marloes van Noorloos, Antske Fokkens
We argue that the goal and exact task developers have in mind should determine how the scope of 'hate speech' is defined.
no code implementations • 17 Jun 2022 • Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato
The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.
no code implementations • 19 Mar 2022 • Shi Hu, Eric Nalisnick, Max Welling
In the literature on adversarial examples, white box and black box attacks have received the most attention.
1 code implementation • 8 Feb 2022 • Rajeev Verma, Eric Nalisnick
We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness.
no code implementations • AABI Symposium 2022 • Javier Antorán, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato
We show that for neural networks (NNs) with normalisation layers, i.e. batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.
1 code implementation • EMNLP (Eval4NLP) 2021 • Urja Khurana, Eric Nalisnick, Antske Fokkens
Despite their success, modern language models are fragile.
no code implementations • AABI Symposium 2021 • Yijie Zhang, Eric Nalisnick
Grünwald and van Ommen (2017) show that Bayesian inference for linear regression can be inconsistent under model misspecification.
1 code implementation • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato
In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.
no code implementations • AABI Symposium 2021 • Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antorán, José Miguel Hernández-Lobato
In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.
no code implementations • 18 Jun 2020 • Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato
For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model's predictions to those of a reference model.
6 code implementations • 5 Dec 2019 • George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan
In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
2 code implementations • NeurIPS 2019 • Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato
Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.
2 code implementations • 7 Jun 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan
To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods.
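The idea can be illustrated with a toy stand-in for the deep generative model: a Gaussian's log-density plays the role of the flow's log-likelihood, and a bootstrap over training batches gives the typical range of batch-mean log-likelihoods. Details here are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the generative model: a Gaussian fit to training data.
train = rng.normal(0.0, 1.0, size=5000)
mu, sigma = train.mean(), train.std()

def log_lik(x):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def typical_interval(train, batch_size=25, n_boot=2000, alpha=0.01):
    """Bootstrap the batch-mean log-likelihood over training data to find
    the 'typical' range for in-distribution batches."""
    means = np.array([log_lik(rng.choice(train, batch_size)).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

lo, hi = typical_interval(train)
in_batch = rng.normal(0.0, 1.0, size=25)    # in-distribution
ood_batch = rng.normal(0.0, 0.1, size=25)   # OOD, yet gets *higher* likelihood
# A plain likelihood threshold would accept ood_batch; the typicality test
# flags it because its mean log-likelihood falls outside [lo, hi].
```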
1 code implementation • 7 Feb 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow).
4 code implementations • ICLR 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data.
1 code implementation • 9 Oct 2018 • Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth
We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e. dropout).
no code implementations • ICLR 2018 • Oleg Rybakov, Vijai Mohan, Avishkar Misra, Scott LeGrand, Rejith Joseph, Kiuk Chung, Siddharth Singh, Qian You, Eric Nalisnick, Leo Dirac, Runfei Luo
We present a personalized recommender system that uses neural networks to recommend products such as eBooks, audiobooks, mobile apps, video, and music.
no code implementations • 21 Nov 2017 • Disi Ji, Eric Nalisnick, Padhraic Smyth
Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions.
no code implementations • 4 Apr 2017 • Eric Nalisnick, Padhraic Smyth
Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors.
2 code implementations • 20 May 2016 • Eric Nalisnick, Padhraic Smyth
We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes.
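A small sketch of the stick-breaking construction at the heart of this approach, with Kumaraswamy draws standing in for posterior samples (the Kumaraswamy's closed-form inverse CDF is what enables the reparameterization trick, and hence SGVB); names are illustrative:

```python
import numpy as np

def stick_breaking(betas):
    """Map fractions beta_k in (0, 1) to weights
    pi_k = beta_k * prod_{j<k} (1 - beta_j):
    each beta_k breaks off a fraction of the remaining stick."""
    betas = np.asarray(betas)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

# Kumaraswamy(a, b) draws via the closed-form inverse CDF
# x = (1 - (1 - u)^(1/b))^(1/a), u ~ Uniform(0, 1).
rng = np.random.default_rng(0)
a, b = 2.0, 3.0
u = rng.uniform(size=10)
betas = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
pi = stick_breaking(betas)   # nonnegative weights summing to less than 1
```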
no code implementations • 2 Feb 2016 • Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana
A fundamental goal of search engines is to identify, given a query, documents that have relevant text.
no code implementations • 17 Nov 2015 • Eric Nalisnick, Sachin Ravi
We describe a method for learning word embeddings with data-dependent dimensionality.
no code implementations • 10 Jun 2015 • Eric Nalisnick, Anima Anandkumar, Padhraic Smyth
Corrupting the input and hidden layers of deep neural networks (DNNs) with multiplicative noise, often drawn from the Bernoulli distribution (or 'dropout'), provides regularization that has significantly contributed to deep learning's success.
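A minimal NumPy sketch of the two noise families being compared: Bernoulli (dropout) and a continuous Gaussian alternative matched in mean and variance. The "inverted" scaling keeps the noise mean at one; names are illustrative.

```python
import numpy as np

def multiplicative_noise(h, rng, p=0.5, kind="bernoulli"):
    """Corrupt activations h with mean-one multiplicative noise.

    kind="bernoulli" is standard (inverted) dropout: keep with probability
    p, then rescale by 1/p.  kind="gaussian" is a continuous alternative
    with the same mean (1) and variance ((1 - p) / p) as the scaled
    Bernoulli mask."""
    if kind == "bernoulli":
        mask = rng.binomial(1, p, size=h.shape) / p
    else:
        mask = rng.normal(1.0, np.sqrt((1 - p) / p), size=h.shape)
    return h * mask
```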