Search Results for author: Yaniv Yacoby

Found 7 papers, 0 papers with code

Towards Model-Agnostic Posterior Approximation for Fast and Accurate Variational Autoencoders

no code implementations 13 Mar 2024 Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

It approximates the posterior of the true model a priori; fixing this posterior approximation, we then maximize the lower bound with respect to the generative model only.

Density Estimation
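The excerpt above describes a two-stage scheme: first fix a posterior approximation, then optimize the lower bound over the generative model alone. A minimal sketch of that second stage, under simplifying assumptions not from the paper (a 1-D latent, a linear-Gaussian decoder, and a frozen per-point posterior that is handed to us), where the Monte Carlo reconstruction term reduces to least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x generated from a 1-D latent through a fixed linear map.
n, d = 500, 3
z_true = rng.normal(size=(n, 1))
W_true = np.array([[2.0, -1.0, 0.5]])
x = z_true @ W_true + 0.1 * rng.normal(size=(n, d))

# Stage 1 (assumed given): a frozen per-point posterior approximation
# q(z_n) = N(m_n, s^2). Here we centre it near z_true purely to keep
# the illustration self-contained.
m = z_true + 0.05 * rng.normal(size=(n, 1))
s = 0.1

# Stage 2: with q fixed, the expected reconstruction term of the lower
# bound, E_q[log p(x | z; W)], is maximised for a linear-Gaussian
# decoder by least squares of x on samples z ~ q.
z_samp = m + s * rng.normal(size=(n, 1))
W_hat, *_ = np.linalg.lstsq(z_samp, x, rcond=None)

print(np.round(W_hat, 2))  # recovers W_true up to sampling noise
```

The point of the sketch is only the decoupling: the encoder-side quantities (`m`, `s`) never change during the decoder fit.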

An Empirical Analysis of the Advantages of Finite- vs. Infinite-Width Bayesian Neural Networks

no code implementations 16 Nov 2022 Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez

Comparing Bayesian neural networks (BNNs) with different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable.

Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks

no code implementations 14 Jul 2020 Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Variational Auto-encoders (VAEs) are deep generative latent variable models that are widely used for a number of downstream tasks.

Adversarial Robustness

BaCOUn: Bayesian Classifiers with Out-of-Distribution Uncertainty

no code implementations 12 Jul 2020 Théo Guénais, Dimitris Vamvourellis, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan

Traditional training of deep classifiers yields overconfident models that are not reliable under dataset shift.

Bayesian Inference

Uncertainty-Aware (UNA) Bases for Deep Bayesian Regression Using Multi-Headed Auxiliary Networks

no code implementations 21 Jun 2020 Sujay Thakur, Cooper Lorsung, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan

Neural Linear Models (NLM) are deep Bayesian models that produce predictive uncertainties by learning features from the data and then performing Bayesian linear regression over these features.

regression

Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders

no code implementations AABI Symposium 2019 Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z).
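Desideratum (2) above is easy to probe empirically: the aggregate posterior q(z) = (1/N) Σₙ q(z | xₙ) should match the prior p(z) = N(0, 1). A minimal sketch of that check, using hypothetical over-dispersed encoder outputs as stand-ins for a trained VAE's per-point posteriors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs: per-point posterior means/scales.
# The means are over-dispersed (scale 2) relative to the N(0, 1) prior.
mu_n = rng.normal(scale=2.0, size=1000)
sd_n = np.full(1000, 0.1)

# Pool one sample per point to draw from the aggregate posterior q(z).
z = rng.normal(mu_n, sd_n)

# Aggregate variance ~ var(mu_n) + mean(sd_n^2) = 4.01, far from the
# prior variance of 1: a prior / aggregate-posterior mismatch.
print(round(z.mean(), 2), round(z.var(), 2))
```

In this toy case the mismatch shows up already in the second moment; in practice one would compare the full distributions (e.g. with a two-sample test) rather than moments alone.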

Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables

no code implementations 1 Nov 2019 Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez

Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input noise variable).
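The excerpt above describes the BNN+LV generative structure: y = f(x, z; w) + ε, with a prior on the weights w (model uncertainty) and a per-point latent input z (environmental stochasticity). A minimal sketch of predictive sampling under that structure, with assumptions not from the paper (a hypothetical tiny tanh network, and fresh prior draws standing in for an approximate weight posterior):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network f(x, z; w): the latent z enters as an extra input.
def f(x, z, w):
    W1, b1, W2 = w
    h = np.tanh(np.stack([x, z], axis=-1) @ W1 + b1)
    return h @ W2

# Stand-in for an (approximate) posterior over weights: independent
# draws, as variational inference or HMC samples might provide.
def sample_weights():
    return (rng.normal(size=(2, 8)), rng.normal(size=8),
            rng.normal(size=8))

# Predictive distribution at x*: marginalise over BOTH the weight
# samples and the latent input z, plus small observation noise.
x_star = 0.5
samples = np.array([
    f(x_star, rng.normal(), sample_weights()) + 0.1 * rng.normal()
    for _ in range(2000)
])
print(round(samples.mean(), 2), round(samples.std(), 2))
```

The predictive spread here mixes both uncertainty sources; the paper's concern is that the two are non-identifiable (the weights can absorb the input noise, and vice versa), which this kind of naive sampling cannot distinguish.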
