no code implementations • 13 Mar 2024 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
It approximates the posterior of the true model a priori; fixing this posterior approximation, we then maximize the lower bound with respect to the generative model only.
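A minimal sketch of this two-stage idea, under illustrative assumptions not taken from the paper: a one-dimensional Gaussian "VAE" with a frozen approximate posterior q(z|x) = N(mu_x, s2) and a linear Gaussian decoder p(x|z) = N(w*z, 1), where only the decoder parameter w is learned by gradient ascent on the ELBO.

```python
# Hypothetical sketch: freeze an approximate posterior q(z|x) = N(mu_x, s2),
# then maximize the ELBO with respect to the generative model only.
# Decoder: p(x|z) = N(w*z, 1); prior: p(z) = N(0, 1). All values illustrative.

x = 2.0
mu_x, s2 = 1.0, 0.25   # frozen posterior approximation (assumed values)

w = 0.0                 # decoder parameter, the only quantity we optimize
lr = 0.1
for _ in range(200):
    # d/dw E_q[log p(x|z)] for this Gaussian decoder; the KL(q || p) term
    # does not depend on w, so it drops out of the gradient.
    grad = (x - w * mu_x) * mu_x - w * s2
    w += lr * grad

# Closed-form optimum for comparison: w* = x * mu_x / (mu_x**2 + s2) = 1.6
```

Because q is held fixed, the KL term is constant and the optimization reduces to fitting the decoder under the frozen code distribution, which is the point of the two-stage scheme.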
no code implementations • 16 Nov 2022 • Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez
Comparing Bayesian neural networks (BNNs) of different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable.
no code implementations • 14 Jul 2020 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
Variational Auto-encoders (VAEs) are deep generative latent variable models that are widely used for a number of downstream tasks.
no code implementations • 12 Jul 2020 • Théo Guénais, Dimitris Vamvourellis, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan
Traditional training of deep classifiers yields overconfident models that are not reliable under dataset shift.
no code implementations • 21 Jun 2020 • Sujay Thakur, Cooper Lorsung, Yaniv Yacoby, Finale Doshi-Velez, Weiwei Pan
Neural Linear Models (NLMs) are deep Bayesian models that produce predictive uncertainties by learning features from the data and then performing Bayesian linear regression over these features.
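The NLM recipe above can be sketched directly: a feature map followed by exact conjugate Bayesian linear regression on those features. In this hedged example the "network" is a random ReLU layer standing in for learned features, and the precisions alpha and beta are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W, b):
    # One random ReLU layer as a stand-in for a trained feature extractor.
    return np.maximum(x @ W + b, 0.0)

# Toy 1-D regression data (illustrative).
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

D = 32                    # feature dimension (assumed)
W = rng.standard_normal((1, D))
b = rng.standard_normal(D)
alpha, beta = 1.0, 100.0  # prior precision, noise precision (assumed)

# Exact posterior for Bayesian linear regression over the features.
Phi = features(X, W, b)                           # (50, D)
S = np.linalg.inv(alpha * np.eye(D) + beta * Phi.T @ Phi)  # posterior cov
m = beta * S @ Phi.T @ y                          # posterior mean

# Predictive mean and variance at new inputs: the variance combines
# observation noise (1/beta) with uncertainty over the last-layer weights.
X_new = np.array([[0.0], [2.0]])
Phi_new = features(X_new, W, b)
pred_mean = Phi_new @ m
pred_var = 1.0 / beta + np.sum(Phi_new @ S * Phi_new, axis=1)
```

The predictive variance is always at least the noise floor 1/beta, with the feature-space term adding model uncertainty away from the data.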
no code implementations • AABI Symposium 2019 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z).
no code implementations • 1 Nov 2019 • Yaniv Yacoby, Weiwei Pan, Finale Doshi-Velez
Bayesian Neural Networks with Latent Variables (BNN+LVs) capture predictive uncertainty by explicitly modeling model uncertainty (via priors on network weights) and environmental stochasticity (via a latent input noise variable).
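The two uncertainty sources in a BNN+LV can be illustrated by Monte Carlo sampling from the prior predictive: each draw samples both network weights (model uncertainty) and a latent input noise variable z (environmental stochasticity). The architecture and standard-normal priors below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def bnn_lv_sample(x, rng, hidden=16):
    # One function draw: weights from a N(0, 1) prior (model uncertainty)
    # plus a latent input noise variable z (environmental stochasticity).
    z = rng.standard_normal()
    W1 = rng.standard_normal((2, hidden))  # input is [x, z]
    b1 = rng.standard_normal(hidden)
    W2 = rng.standard_normal(hidden)
    h = np.tanh(np.array([x, z]) @ W1 + b1)
    return h @ W2

# Prior predictive at x = 0.5: integrating out both weights and z
# by simple Monte Carlo gives the total predictive uncertainty.
samples = np.array([bnn_lv_sample(0.5, rng) for _ in range(200)])
pred_mean, pred_std = samples.mean(), samples.std()
```

In the actual model one would replace the prior draws over weights with posterior samples; the decomposition into weight uncertainty and input noise is the part this sketch illustrates.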