no code implementations • 19 Mar 2022 • Shi Hu, Eric Nalisnick, Max Welling

In the literature on adversarial examples, white-box and black-box attacks have received the most attention.

no code implementations • 8 Feb 2022 • Rajeev Verma, Eric Nalisnick

We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness.

no code implementations • AABI Symposium 2022 • Javier Antorán, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NNs) with normalisation layers, i.e., batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.
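The invariance behind this finding is easy to exhibit: multiplying the weights feeding a normalisation layer by any positive constant leaves the network function unchanged, so the loss surface is flat along that ray and the "volume" of the mode is ill-defined. A minimal PyTorch sketch with a hypothetical toy block (not the paper's code; eps=0 makes the invariance exact rather than approximate):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy block: a linear layer feeding a batch-norm layer.
    lin = nn.Linear(4, 4, bias=False)
    bn = nn.BatchNorm1d(4, affine=False, eps=0.0)
    bn.train()

    x = torch.randn(8, 4)
    out_before = bn(lin(x))

    # Rescale the weights by an arbitrary positive constant: the
    # normalisation layer undoes it, so the output is unchanged.
    with torch.no_grad():
        lin.weight.mul_(7.3)
    out_after = bn(lin(x))

    print(torch.allclose(out_before, out_after, atol=1e-5))  # True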

1 code implementation • EMNLP (Eval4NLP) 2021 • Urja Khurana, Eric Nalisnick, Antske Fokkens

Despite their success, modern language models are fragile.

no code implementations • AABI Symposium 2021 • Yijie Zhang, Eric Nalisnick

Grünwald and van Ommen (2017) show that Bayesian inference for linear regression can be inconsistent under model misspecification.
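A remedy studied in this line of work is the generalized (tempered) posterior, which raises the likelihood to a power η ≤ 1. For conjugate Gaussian linear regression the tempering simply rescales the noise precision in the closed-form update; a minimal sketch (toy heavy-tailed data as a stand-in for misspecification; η, α, β are assumed hyperparameters):

    import numpy as np

    rng = np.random.default_rng(0)

    # Tempered posterior: p_eta(w | D) ∝ p(D | w)^eta p(w), eta <= 1.
    n, d = 50, 3
    X = rng.normal(size=(n, d))
    # Heavy-tailed noise makes the Gaussian likelihood misspecified.
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=n)

    def tempered_posterior(eta, beta=1.0, alpha=1.0):
        # N(w; m, S) with S = (alpha*I + eta*beta*X^T X)^-1,
        #                 m = eta*beta * S @ X^T y
        S = np.linalg.inv(alpha * np.eye(d) + eta * beta * X.T @ X)
        m = eta * beta * S @ X.T @ y
        return m, S

    m_full, S_full = tempered_posterior(eta=1.0)   # standard Bayes
    m_safe, S_safe = tempered_posterior(eta=0.25)  # tempered: wider, more cautious
    print(np.diag(S_full), np.diag(S_safe))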

2 code implementations • 28 Oct 2020 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.
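A from-scratch sketch of those two steps, under stated assumptions: a toy regression net, the last layer standing in for the paper's variance-based subnetwork selection, and the Gauss-Newton matrix of the linearized model as the Hessian (requires torch >= 2.0 for torch.func.functional_call):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Step 1: MAP estimate of all weights on a toy 1-D regression problem.
    X = torch.linspace(-2, 2, 50).unsqueeze(1)
    y = torch.sin(3 * X) + 0.1 * torch.randn_like(X)
    model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        ((model(X) - y) ** 2).mean().backward()
        opt.step()

    # Step 2: choose a subnetwork (here the last layer's 16 weights).
    params = dict(model.named_parameters())
    sub = params['2.weight'].detach().flatten()

    # Step 3: full-covariance Gaussian over the subnetwork from the
    # linearized model: H = J^T J / sigma^2 + prior precision.
    def f_sub(w):  # network output as a function of subnetwork weights only
        p = {k: (w.view_as(v) if k == '2.weight' else v.detach())
             for k, v in params.items()}
        return torch.func.functional_call(model, p, (X,)).squeeze(1)

    J = torch.autograd.functional.jacobian(f_sub, sub)  # (50, 16)
    sigma2, prior_prec = 0.1 ** 2, 1.0
    H = J.T @ J / sigma2 + prior_prec * torch.eye(16)
    cov = torch.linalg.inv(H)  # subnetwork posterior covariance
    print(cov.shape)           # torch.Size([16, 16])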

no code implementations • AABI Symposium 2021 • Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.

no code implementations • 18 Jun 2020 • Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato

For this reason, we propose predictive complexity priors: functional priors defined by comparing the model's predictions to those of a reference model.
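As a rough illustration of the flavour (not the paper's construction): treat a divergence between the model's predictions and a reference model's predictions at sample inputs as a log-prior penalty during MAP training. The models, the penalty weight lam, and the KL direction below are all assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    X = torch.randn(128, 5)
    y = (X[:, 0] > 0).long()

    ref = nn.Linear(5, 2)  # reference model (pretend it is already fit)
    net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    lam = 1.0              # prior strength (assumed hyperparameter)

    for _ in range(200):
        opt.zero_grad()
        nll = F.cross_entropy(net(X), y)
        # Functional prior term: KL between the reference model's and the
        # model's predictive distributions at the sample inputs.
        kl = F.kl_div(F.log_softmax(net(X), dim=1),
                      F.log_softmax(ref(X), dim=1),
                      log_target=True, reduction='batchmean')
        (nll + lam * kl).backward()
        opt.step()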

5 code implementations • 5 Dec 2019 • George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan

In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
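The workhorse identity of that perspective is the change of variables: for an invertible f with x = f(z), log p_x(x) = log p_z(f^-1(x)) + log |det J_{f^-1}(x)|. A one-dimensional affine flow makes this concrete (a minimal sketch, not taken from the review):

    import numpy as np
    from scipy.stats import norm

    # Affine flow x = f(z) = a*z + b with standard-normal base density.
    a, b = 2.0, 0.5

    def log_prob_x(x):
        z = (x - b) / a                                 # inverse transform
        log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # log N(z; 0, 1)
        log_det = -np.log(abs(a))                       # log |dz/dx|
        return log_base + log_det

    # Sanity check against the exact density N(x; b, a^2):
    x = 1.3
    print(np.isclose(log_prob_x(x), norm.logpdf(x, loc=b, scale=abs(a))))  # True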

1 code implementation • NeurIPS 2019 • Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato

Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.

2 code implementations • 7 Jun 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan

To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods.
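The flavour of the test can be shown with a stand-in density whose log-likelihood is exact: compare a batch's mean negative log-likelihood to an entropy estimate and bootstrap a threshold from training data. A sketch of the idea only; the Gaussian "model", batch size, and level are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in "model": a fitted 1-D Gaussian with exact log-likelihood.
    mu, sigma = 0.0, 1.0
    def loglik(x):
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    train = rng.normal(mu, sigma, size=10_000)
    M = 25  # test batch size

    # Typicality statistic: |mean NLL of the batch - entropy estimate|,
    # with the entropy estimated by the mean NLL over training data.
    H_hat = -loglik(train).mean()
    def stat(batch):
        return abs(-loglik(batch).mean() - H_hat)

    # Bootstrap a threshold from training batches.
    boot = [stat(rng.choice(train, M)) for _ in range(2_000)]
    thresh = np.quantile(boot, 0.99)

    in_batch = rng.normal(mu, sigma, M)  # in-distribution
    ood_batch = rng.normal(4.0, 1.0, M)  # out-of-distribution
    print(stat(in_batch) > thresh, stat(ood_batch) > thresh)  # False True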

1 code implementation • 7 Feb 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e., a normalizing flow).

4 code implementations • ICLR 2019 • Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data.

1 code implementation • 9 Oct 2018 • Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth

We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e., dropout).
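For instance, inverted Bernoulli dropout with keep probability p multiplies activations by m/p with m ~ Bernoulli(p), which has mean 1 and variance (1-p)/p; a Gaussian with matched moments is one continuous counterpart (a minimal sketch, not the paper's code):

    import numpy as np

    rng = np.random.default_rng(0)

    h = rng.normal(size=(4, 8))  # a batch of hidden activations
    p = 0.5                      # keep probability

    # Bernoulli dropout (inverted scaling) vs. moment-matched Gaussian noise.
    bern = rng.binomial(1, p, size=h.shape) / p
    gauss = rng.normal(1.0, np.sqrt((1 - p) / p), size=h.shape)
    print((h * bern).mean(), (h * gauss).mean())  # both ≈ h.mean() in expectation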

no code implementations • ICLR 2018 • Oleg Rybakov, Vijai Mohan, Avishkar Misra, Scott LeGrand, Rejith Joseph, Kiuk Chung, Siddharth Singh, Qian You, Eric Nalisnick, Leo Dirac, Runfei Luo

We present a personalized recommender system that uses neural networks to recommend products such as eBooks, audiobooks, mobile apps, video, and music.

no code implementations • 21 Nov 2017 • Disi Ji, Eric Nalisnick, Padhraic Smyth

Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions.

no code implementations • 4 Apr 2017 • Eric Nalisnick, Padhraic Smyth

Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors.

1 code implementation • 20 May 2016 • Eric Nalisnick, Padhraic Smyth

We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes.
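Concretely, the paper sidesteps the awkward Beta reparameterization by drawing stick-breaking fractions from the Kumaraswamy distribution via its inverse CDF, which keeps samples differentiable in the variational parameters. A minimal NumPy sketch of that sampling step (truncation level K is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)

    def stick_breaking(a, b, K):
        u = rng.uniform(size=K - 1)
        # Kumaraswamy(a, b) samples via the inverse CDF.
        v = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
        # pi_k = v_k * prod_{j<k} (1 - v_j), with the last stick absorbing
        # the remainder so the truncated weights sum to one.
        pieces = np.concatenate([v, [1.0]])
        remain = np.concatenate([[1.0], np.cumprod(1.0 - v)])
        return pieces * remain

    pi = stick_breaking(a=1.0, b=5.0, K=10)
    print(pi.sum())  # 1.0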

no code implementations • 2 Feb 2016 • Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana

A fundamental goal of search engines is to identify, given a query, documents that have relevant text.

no code implementations • 17 Nov 2015 • Eric Nalisnick, Sachin Ravi

We describe a method for learning word embeddings with data-dependent dimensionality.

no code implementations • 10 Jun 2015 • Eric Nalisnick, Anima Anandkumar, Padhraic Smyth

Corrupting the input and hidden layers of deep neural networks (DNNs) with multiplicative noise, often drawn from the Bernoulli distribution (or 'dropout'), provides regularization that has significantly contributed to deep learning's success.
