no code implementations • 28 Aug 2022 • Joshua C. Chang, Ted L. Chang, Carson C. Chow, Rohit Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia
We developed an inherently interpretable multilevel Bayesian framework for representing variation in regression coefficients that mimics the piecewise linearity of ReLU-activated deep neural networks.
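As a toy illustration of the idea (an assumed sketch, not the authors' implementation): a regression coefficient that varies piecewise-linearly in a covariate can be expressed with ReLU (hinge) basis functions, which is the same piecewise linearity a ReLU network produces. The knot locations and coefficient values below are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(-2, 2, size=200)           # covariate driving coefficient variation
knots = np.array([-1.0, 0.0, 1.0])         # assumed knot locations
basis = np.maximum(z[:, None] - knots, 0)  # ReLU (hinge) basis: piecewise linear in z

# The effective slope beta(z) is a linear combination of hinge features,
# so the regression of y on x is piecewise linear in z.
weights = np.array([0.5, -1.0, 2.0])       # illustrative hinge coefficients
beta_z = 1.0 + basis @ weights             # varying coefficient beta(z)
x = rng.normal(size=200)
y = beta_z * x + rng.normal(scale=0.1, size=200)

# Recover the hinge weights by least squares on the interaction features.
X = np.column_stack([x, basis * x[:, None]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(2))
```

In the paper's multilevel Bayesian setting these hinge coefficients would carry priors rather than being fit by least squares; the sketch only shows the piecewise-linear parameterization itself.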
1 code implementation • ICLR 2021 • Joshua C. Chang, Patrick Fletcher, Jungmin Han, Ted L. Chang, Shashaank Vattikuti, Bart Desmet, Ayah Zirikly, Carson C. Chow
However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features.
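A small linear example (assumed for illustration, not from the paper) makes the decoding/encoding asymmetry concrete: even when the decoder matrix is sparse, the corresponding least-squares encoder is generally dense, so every latent score mixes nearly all observed features.

```python
import numpy as np

# Sparse decoding: x ≈ W z, with several exact zeros in W.
W = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.5, 1.0]])

# The least-squares encoder z_hat = pinv(W) x has no zeros at all.
encoder = np.linalg.pinv(W)
print((np.abs(W) < 1e-12).sum())        # zeros in the decoder
print((np.abs(encoder) < 1e-12).sum())  # zeros in the encoder
```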
1 code implementation • 5 Dec 2019 • Joshua C. Chang, Shashaank Vattikuti, Carson C. Chow
By binding the generative IRT model to a Bayesian neural network (forming a probabilistic autoencoder), one obtains a scoring algorithm consistent with the interpretable Bayesian model.
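To sketch the generative side of such a model (a hypothetical toy, not the authors' code): the decoder can be a two-parameter-logistic (2PL) IRT model, P(correct) = sigmoid(a_j (theta - b_j)), and scoring amounts to inferring the ability theta from a response pattern under that model. Here the inference is done by a grid-based MAP estimate; an encoder network would amortize exactly this step. Item parameters are randomly assumed.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=20)  # item discriminations (assumed)
b = rng.normal(size=20)             # item difficulties (assumed)

def simulate(theta):
    """Draw a binary response pattern from the 2PL generative model."""
    return (rng.uniform(size=a.shape) < sigmoid(a * (theta - b))).astype(float)

def map_score(responses, grid=np.linspace(-4, 4, 401)):
    """Grid-based MAP estimate of theta under a standard-normal prior."""
    p = sigmoid(a * (grid[:, None] - b))          # (grid, items) success probs
    loglik = (responses * np.log(p)
              + (1 - responses) * np.log(1 - p)).sum(axis=1)
    logpost = loglik - 0.5 * grid**2              # add N(0, 1) log-prior
    return grid[np.argmax(logpost)]

resp = simulate(1.5)
theta_hat = map_score(resp)
print(theta_hat)
```

Because the scoring function maximizes the posterior of the same generative IRT model that produced the data, the resulting scores are consistent with the interpretable Bayesian model, which is the property the abstract highlights.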