Search Results for author: Eric Nalisnick

Found 21 papers, 9 papers with code

Adversarial Defense via Image Denoising with Chaotic Encryption

no code implementations 19 Mar 2022 Shi Hu, Eric Nalisnick, Max Welling

In the literature on adversarial examples, white box and black box attacks have received the most attention.

Adversarial Defense, Image Denoising

Calibrated Learning to Defer with One-vs-All Classifiers

no code implementations 8 Feb 2022 Rajeev Verma, Eric Nalisnick

We find that Mozannar & Sontag's (2020) multiclass framework is not calibrated with respect to expert correctness.

Hate Speech Detection

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations AABI Symposium 2022 Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NN) with normalisation layers, i.e. batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Image Classification, Model Selection

Bayesian Deep Learning via Subnetwork Inference

2 code implementations 28 Oct 2020 Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.

Bayesian Inference
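
As a rough illustration of the subnetwork recipe described above (MAP estimate for all weights, then a full-covariance Laplace posterior over a small subset), here is a minimal NumPy sketch on a toy model that is linear in its weights, so the Jacobian is just the feature matrix. The largest-magnitude selection rule and the noise/prior variances are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of "subnetwork linearized Laplace" on a toy regression model.
# Assumptions (not from the paper): a model that is linear in its weights,
# Gaussian likelihood, and a largest-|weight| rule for picking the subnetwork.
import numpy as np

rng = np.random.default_rng(0)

# Toy data and random features; f(x, w) = Phi(x) @ w, with 20 "weights" total.
X = rng.normal(size=(200, 5))
Phi = np.tanh(X @ rng.normal(size=(5, 20)))
w_true = rng.normal(size=20)
y = Phi @ w_true + 0.1 * rng.normal(size=200)

# Step 1: MAP estimate of *all* weights (ridge regression here).
noise_var, prior_var = 0.1**2, 1.0
A = Phi.T @ Phi / noise_var + np.eye(20) / prior_var
w_map = np.linalg.solve(A, Phi.T @ y / noise_var)

# Step 2: choose a small subnetwork S (here: the 5 largest-magnitude weights).
S = np.argsort(-np.abs(w_map))[:5]
J_S = Phi[:, S]                      # Jacobian w.r.t. the subnetwork weights

# Step 3: full-covariance Gaussian (Laplace) posterior over the subnetwork only;
# all remaining weights stay fixed at their MAP values.
H_S = J_S.T @ J_S / noise_var + np.eye(len(S)) / prior_var
Sigma_S = np.linalg.inv(H_S)

# Predictive variance from the linearized model: J_S Sigma_S J_S^T + noise.
x_star_feats = Phi[:1]               # reuse a training row as a test point
pred_mean = x_star_feats @ w_map
pred_var = x_star_feats[:, S] @ Sigma_S @ x_star_feats[:, S].T + noise_var
print(pred_mean.item(), pred_var.item())
```

For an actual network, J_S would instead come from differentiating the network outputs with respect to the chosen subnetwork weights at the MAP point, which is what the linearized Laplace approximation requires.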

Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference

no code implementations AABI Symposium 2021 Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antoran, José Miguel Hernández-Lobato

In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.

Bayesian Inference

Predictive Complexity Priors

no code implementations 18 Jun 2020 Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato

For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model's predictions to those of a reference model.

Few-Shot Learning
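
One way to make "a functional prior defined by comparing the model's predictions to those of a reference model" concrete is to place a prior directly on a predictive divergence and pull it back to the hyperparameter by change of variables. The notation below (hyperparameter tau, reference model p_0, divergence prior p_D) is a hedged sketch of that construction, not necessarily the paper's exact definition.

\[
D(\tau) \;=\; \mathbb{E}_{x}\Big[\mathrm{KL}\big(p(y \mid x, \tau)\,\big\|\,p_0(y \mid x)\big)\Big],
\qquad
p(\tau) \;=\; p_D\big(D(\tau)\big)\,\Big|\tfrac{\partial D(\tau)}{\partial \tau}\Big|,
\]

where p_0 is the reference model and p_D is a prior chosen on the divergence scale, so beliefs are expressed about predictive behaviour rather than about the weights directly.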

Normalizing Flows for Probabilistic Modeling and Inference

5 code implementations 5 Dec 2019 George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan

In this review, we attempt to provide such a perspective by describing flows through the lens of probabilistic modeling and inference.
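
The identity at the heart of the flows reviewed here is the change-of-variables formula: in standard notation, an invertible, differentiable transform T maps a base variable u with simple density p_u to x = T(u) with tractable density

\[
p_x(x) \;=\; p_u\big(T^{-1}(x)\big)\,\big|\det J_{T^{-1}}(x)\big|
        \;=\; p_u(u)\,\big|\det J_{T}(u)\big|^{-1},
\]

so the same model supports both sampling (push u through T) and exact density evaluation.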

Bayesian Batch Active Learning as Sparse Subset Approximation

1 code implementation NeurIPS 2019 Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato

Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.

Active Learning

Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

2 code implementations 7 Jun 2019 Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan

To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods.
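
A minimal sketch of a typicality-style batch test in NumPy: compare a test batch's average negative log-likelihood (NLL) under the trained model to the distribution of batch-average NLLs on held-out in-distribution data. The bootstrap threshold and batch size below are illustrative choices rather than the paper's exact procedure.

```python
# Typicality-style OOD test: a batch is flagged if its mean NLL deviates too far
# from the model's entropy estimate (mean NLL on held-out in-distribution data).
import numpy as np

def batch_typicality_test(nll_heldout, nll_test_batch, n_boot=10000, alpha=0.01, seed=0):
    """Return True if the test batch looks out-of-distribution."""
    rng = np.random.default_rng(seed)
    m = len(nll_test_batch)
    h_hat = nll_heldout.mean()                       # empirical entropy estimate
    # Bootstrap |batch-mean NLL - h_hat| for in-distribution batches of size m.
    boot = rng.choice(nll_heldout, size=(n_boot, m), replace=True).mean(axis=1)
    eps = np.quantile(np.abs(boot - h_hat), 1 - alpha)
    return abs(nll_test_batch.mean() - h_hat) > eps

# Usage with made-up NLL values (in practice: -log p(x) from the generative model).
rng = np.random.default_rng(1)
nll_heldout = rng.normal(3.0, 0.3, size=5000)        # in-distribution NLLs
nll_ood_batch = rng.normal(1.5, 0.3, size=32)        # suspiciously low NLLs
print(batch_typicality_test(nll_heldout, nll_ood_batch))   # expected: True
```

Operating on batch averages reflects the typical-set view: an input can receive a high likelihood and still sit outside the region where the model's probability mass actually concentrates.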

Hybrid Models with Deep and Invertible Features

1 code implementation 7 Feb 2019 Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow).

Probabilistic Deep Learning
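
Read as a single objective, the hybrid model above gets both terms from one forward pass: the invertible transform f supplies an exact density for x, and its output z = f(x) doubles as the features of the linear predictor. The decomposition below is a sketch of that factorization with illustrative notation (base density p_z, linear weights beta); the exact form of the predictive term in the paper may differ.

\[
z = f(x), \qquad
\log p(x, y) \;=\;
\underbrace{\log p_z(z) + \log\Big|\det \tfrac{\partial f}{\partial x}\Big|}_{\log p(x)\ \text{(flow)}}
\;+\;
\underbrace{\log p\big(y \mid \beta^{\top} z\big)}_{\text{linear model on features}}.
\]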

Do Deep Generative Models Know What They Don't Know?

4 code implementations ICLR 2019 Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data.

Dropout as a Structured Shrinkage Prior

1 code implementation 9 Oct 2018 Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth

We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e. dropout).

Bayesian Inference

The Effectiveness of a Two-Layer Neural Network for Recommendations

no code implementations ICLR 2018 Oleg Rybakov, Vijai Mohan, Avishkar Misra, Scott LeGrand, Rejith Joseph, Kiuk Chung, Siddharth Singh, Qian You, Eric Nalisnick, Leo Dirac, Runfei Luo

We present a personalized recommender system that uses a neural network to recommend products such as eBooks, audiobooks, mobile apps, video, and music.

Recommendation Systems

Mondrian Processes for Flow Cytometry Analysis

no code implementations 21 Nov 2017 Disi Ji, Eric Nalisnick, Padhraic Smyth

Analysis of flow cytometry data is an essential tool for clinical diagnosis of hematological and immunological conditions.

Learning Approximately Objective Priors

no code implementations 4 Apr 2017 Eric Nalisnick, Padhraic Smyth

Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors.

Stick-Breaking Variational Autoencoders

1 code implementation 20 May 2016 Eric Nalisnick, Padhraic Smyth

We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes.
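
To make the stick-breaking latent construction concrete, here is a small NumPy sketch: fractions v_k in (0, 1) are turned into weights pi_k = v_k * prod_{j<k} (1 - v_j). Sampling v from a Kumaraswamy distribution via its inverse CDF is shown as one reparameterizable choice; the specific parameter values are illustrative, not the paper's settings.

```python
# Stick-breaking construction of latent weights from fractions v in (0, 1).
import numpy as np

def sample_kumaraswamy(a, b, rng):
    """Inverse-CDF sample: u ~ Uniform(0,1), v = (1 - (1 - u)^(1/b))^(1/a)."""
    u = rng.uniform(size=np.shape(a))
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def stick_breaking(v):
    """Map fractions v_1..v_K to weights pi_k = v_k * prod_{j<k} (1 - v_j)."""
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

rng = np.random.default_rng(0)
a = np.ones(10)                    # illustrative Kumaraswamy parameters, K = 10 sticks
b = 3.0 * np.ones(10)
v = sample_kumaraswamy(a, b, rng)
pi = stick_breaking(v)
print(pi, pi.sum())                # weights decay stochastically; the sum stays below 1
```

Because the Kumaraswamy inverse CDF is differentiable in its parameters, this sampler fits the reparameterization trick that Stochastic Gradient Variational Bayes relies on.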

A Dual Embedding Space Model for Document Ranking

no code implementations 2 Feb 2016 Bhaskar Mitra, Eric Nalisnick, Nick Craswell, Rich Caruana

A fundamental goal of search engines is to identify, given a query, documents that have relevant text.

Document Ranking, Word Embeddings

Learning the Dimensionality of Word Embeddings

no code implementations 17 Nov 2015 Eric Nalisnick, Sachin Ravi

We describe a method for learning word embeddings with data-dependent dimensionality.

Learning Word Embeddings

A Scale Mixture Perspective of Multiplicative Noise in Neural Networks

no code implementations 10 Jun 2015 Eric Nalisnick, Anima Anandkumar, Padhraic Smyth

Corrupting the input and hidden layers of deep neural networks (DNNs) with multiplicative noise, often drawn from the Bernoulli distribution (or 'dropout'), provides regularization that has significantly contributed to deep learning's success.

Model Compression
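
A short sketch of the multiplicative-noise corruption described above: the same hidden layer regularized either with Bernoulli noise (classic dropout) or with a continuous Gaussian analogue. Layer sizes and noise rates are illustrative only.

```python
# Multiplicative noise on a hidden layer: Bernoulli noise recovers classic
# dropout, while Gaussian noise N(1, alpha) is a continuous analogue.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))

def forward(x, noise="bernoulli", p_keep=0.8, alpha=0.25, train=True):
    h = np.maximum(x @ W1, 0.0)                          # hidden layer (ReLU)
    if train:
        if noise == "bernoulli":                         # dropout: scale so E[xi] = 1
            xi = rng.binomial(1, p_keep, size=h.shape) / p_keep
        else:                                            # Gaussian multiplicative noise
            xi = rng.normal(1.0, np.sqrt(alpha), size=h.shape)
        h = h * xi
    return h @ W2

x = rng.normal(size=(4, 8))
print(forward(x, "bernoulli"), forward(x, "gaussian"), sep="\n")
```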
