no code implementations • 16 Nov 2022 • Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez
Comparing Bayesian neural networks (BNNs) with different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable.
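As a hedged illustration of why width comparisons are delicate (a sketch of the general issue, not the paper's method), the snippet below draws prior functions from one-hidden-layer BNNs of several widths, using a 1/sqrt(width) scaling so that the prior predictive variance stays comparable as width grows; without such scaling, amplitude and width would change together.

```python
# Prior draws from one-hidden-layer BNNs of different widths (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def prior_draws(width, x, n_samples=2000):
    """Sample f(x) under iid N(0, 1) weight priors, scaled by 1/sqrt(width)."""
    w1 = rng.normal(size=(n_samples, width))   # input-to-hidden weights
    b1 = rng.normal(size=(n_samples, width))   # hidden biases
    w2 = rng.normal(size=(n_samples, width))   # hidden-to-output weights
    h = np.tanh(x * w1 + b1)                   # hidden activations at input x
    return (h * w2).sum(axis=1) / np.sqrt(width)

for width in [10, 100, 1000]:
    f = prior_draws(width, x=1.0)
    print(f"width={width:5d}  prior sd at x=1: {f.std():.3f}")
```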
no code implementations • 15 Apr 2022 • Wenying Deng, Beau Coker, Rajarshi Mukherjee, Jeremiah Zhe Liu, Brent A. Coull
We develop a simple and unified framework for nonlinear variable selection that incorporates uncertainty in the prediction function and is compatible with a wide range of machine learning models (e.g., tree ensembles, kernel methods, neural networks, etc.).
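A minimal sketch of the general idea, assuming synthetic data and a random-forest base model (this is not the paper's framework): estimate permutation importances across bootstrap refits, and select a variable only when its importance is reliably above zero.

```python
# Model-agnostic variable selection with refit uncertainty (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # x2..x4 are noise

scores = []
for seed in range(20):                      # bootstrap to capture refit uncertainty
    idx = rng.integers(0, len(X), len(X))
    model = RandomForestRegressor(n_estimators=50, random_state=seed)
    model.fit(X[idx], y[idx])
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=seed)
    scores.append(imp.importances_mean)

scores = np.array(scores)                   # (n_refits, n_features)
lo = np.percentile(scores, 5, axis=0)       # lower band of importance
for j in range(X.shape[1]):
    flag = "select" if lo[j] > 0 else "drop"
    print(f"x{j}: mean={scores[:, j].mean():.3f}  5th pct={lo[j]:.3f}  -> {flag}")
```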
1 code implementation • 23 Feb 2022 • Beau Coker, Wessel P. Bruinsma, David R. Burt, Weiwei Pan, Finale Doshi-Velez
Finally, we show that the optimal approximate posterior need not tend to the prior if the activation function is not odd, which demonstrates that our statements cannot be generalized arbitrarily.
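A quick numeric illustration of where the odd/non-odd distinction bites (a sketch of the intuition, not the paper's proof): odd activations such as tanh have zero mean under a symmetric input distribution, while non-odd ones such as ReLU do not.

```python
# Mean of odd vs. non-odd activations under a standard normal input.
import numpy as np

z = np.random.default_rng(0).normal(size=1_000_000)
print("E[tanh(z)] ~", np.tanh(z).mean())          # ~0: tanh is odd
print("E[relu(z)] ~", np.maximum(z, 0).mean())    # ~0.399 = 1/sqrt(2*pi): ReLU is not odd
```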
no code implementations • 13 Jun 2021 • Beau Coker, Weiwei Pan, Finale Doshi-Velez
Variational inference enables approximate posterior inference of the highly over-parameterized neural networks that are popular in modern machine learning.
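For readers unfamiliar with the setup, here is a minimal mean-field variational inference sketch in PyTorch (illustrative only, not the paper's experimental setup): a factorized Gaussian q(w) = N(mu, sigma^2) over the weights of a tiny network, trained by minimizing the negative ELBO with the reparameterization trick.

```python
# Mean-field VI for a 1-20-1 tanh network on toy regression data (sketch).
import torch

torch.manual_seed(0)
x = torch.linspace(-2, 2, 50).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)

n_w = 20 + 20 + 20 + 1                                # weights/biases of a 1-20-1 net
mu = torch.zeros(n_w, requires_grad=True)
rho = torch.full((n_w,), -3.0, requires_grad=True)    # sigma = softplus(rho)

def forward(w):
    w1, b1, w2, b2 = w[:20].view(1, 20), w[20:40], w[40:60].view(20, 1), w[60:]
    return torch.tanh(x @ w1 + b1) @ w2 + b2

opt = torch.optim.Adam([mu, rho], lr=0.05)
for step in range(2000):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(n_w)                 # reparameterized sample from q
    log_lik = -0.5 * ((y - forward(w)) ** 2 / 0.1**2).sum()
    # KL(q || N(0, I)) in closed form for factorized Gaussians
    kl = (sigma**2 + mu**2 - 1 - 2 * torch.log(sigma)).sum() / 2
    loss = kl - log_lik                               # negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean posterior weight sd:", torch.nn.functional.softplus(rho).mean().item())
```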
no code implementations • 12 Dec 2019 • Beau Coker, Melanie F. Pradier, Finale Doshi-Velez
While Bayesian neural networks have many appealing characteristics, current priors do not easily allow users to specify basic properties such as expected lengthscale or amplitude variance.
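A hedged sketch of the problem being described (not the paper's solution): with standard Gaussian weight priors, lengthscale and amplitude are implicit byproducts of the architecture and weight scales, so a user can only measure them empirically after the fact rather than specify them up front.

```python
# Empirically measuring the implied amplitude and lengthscale of a BNN prior.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-3, 3, 200)

def prior_sample(width=100):
    w1 = rng.normal(size=width)
    b1 = rng.normal(size=width)
    w2 = rng.normal(size=width)
    return np.tanh(np.outer(xs, w1) + b1) @ w2 / np.sqrt(width)

draws = np.stack([prior_sample() for _ in range(500)])   # (500, 200)
amplitude_var = draws.var(axis=0).mean()
# crude lengthscale proxy: lag at which spatial autocorrelation drops below 0.5
corr = [np.corrcoef(draws[:, 0], draws[:, k])[0, 1] for k in range(200)]
lag = next((k for k, c in enumerate(corr) if c < 0.5), len(xs))
print(f"implied amplitude variance ~ {amplitude_var:.2f}")
print(f"implied lengthscale ~ {lag * (xs[1] - xs[0]):.2f}")
```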
1 code implementation • 17 Jan 2019 • Benjamin Kompa, Beau Coker
We demonstrate that our interpolations learn relevant metagenes that recapitulate known glioblastoma (GBM) mechanisms and suggest possible starting points for investigations into the metastasis of skin cutaneous melanoma (SKCM) into GBM.
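To make the interpolation idea concrete, here is a minimal latent-space sketch on synthetic stand-in data (the paper works with learned representations of real tumor expression profiles): embed two samples, walk linearly between them in the latent space, and decode each intermediate point.

```python
# Latent-space interpolation between two samples via PCA (illustrative sketch).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1000))            # stand-in for expression profiles

pca = PCA(n_components=10).fit(X)
z_a, z_b = pca.transform(X[:1])[0], pca.transform(X[1:2])[0]

for t in np.linspace(0, 1, 5):              # walk from sample A to sample B
    z = (1 - t) * z_a + t * z_b
    profile = pca.inverse_transform(z[None, :])[0]
    top = np.argsort(-np.abs(profile))[:3]  # features that dominate this point
    print(f"t={t:.2f}  top feature indices: {top}")
```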
2 code implementations • 23 Apr 2018 • Beau Coker, Cynthia Rudin, Gary King
We introduce hacking intervals, which are the range of a summary statistic one may obtain given a class of possible endogenous manipulations of the data.
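A brute-force sketch of the definition (a deliberately simplified manipulation class, not the paper's algorithm): take the manipulation class to be "drop at most one observation" and report the range of an OLS slope over every allowed dataset.

```python
# Hacking interval for a regression slope under "drop <= 1 point" (sketch).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2 * x + rng.normal(size=30)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]           # OLS slope

estimates = [slope(x, y)]                   # the unmanipulated estimate
for i in range(len(x)):                     # every single-point deletion
    keep = np.delete(np.arange(len(x)), i)
    estimates.append(slope(x[keep], y[keep]))

print(f"reported slope: {estimates[0]:.3f}")
print(f"hacking interval: [{min(estimates):.3f}, {max(estimates):.3f}]")
```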