Search Results for author: Anindya Bhadra

Found 7 papers, 2 papers with code

Posterior Inference on Shallow Infinitely Wide Bayesian Neural Networks under Weights with Unbounded Variance

1 code implementation • 18 May 2023 • Jorge Loría, Anindya Bhadra

From the classical and influential works of Neal (1996), it is known that the infinite-width scaling limit of a Bayesian neural network with one hidden layer is a Gaussian process, when the network weights have bounded prior variance.

Gaussian Processes, Uncertainty Quantification
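The bounded-variance case referenced above (the Neal limit, which this paper moves beyond) can be illustrated with a minimal simulation: sample many shallow networks from a Gaussian weight prior and observe that the output at a fixed input looks increasingly Gaussian as width grows. This sketch is illustrative only; the scaling and activation choice are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def shallow_net_output(x, width, rng):
    # One hidden layer with tanh activations; all weights have bounded
    # (here unit) prior variance, with the 1/sqrt(width) output scaling
    # used in Neal's (1996) infinite-width argument.
    w1 = rng.normal(0.0, 1.0, size=width)   # input-to-hidden weights
    b1 = rng.normal(0.0, 1.0, size=width)   # hidden biases
    w2 = rng.normal(0.0, 1.0, size=width)   # hidden-to-output weights
    h = np.tanh(w1 * x + b1)
    return w2 @ h / np.sqrt(width)

# Draw many independent networks from the prior and inspect the output
# distribution at a fixed input; as width grows it approaches a Gaussian.
samples = np.array([shallow_net_output(0.5, 1000, rng) for _ in range(5000)])
print(samples.mean(), samples.std())
```

With unbounded-variance (e.g. stable) weight priors, the subject of the paper, this central-limit argument fails and the limit is no longer a Gaussian process.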

Merging Two Cultures: Deep and Statistical Learning

no code implementations • 22 Oct 2021 • Anindya Bhadra, Jyotishka Datta, Nick Polson, Vadim Sokolov, Jianeng Xu

We show that prediction, interpolation and uncertainty quantification can be achieved using probabilistic methods at the output layer of the model.

Dimensionality Reduction, Feature Engineering +2

Beyond Matérn: On A Class of Interpretable Confluent Hypergeometric Covariance Functions

no code implementations • 14 Nov 2019 • Pulong Ma, Anindya Bhadra

A key benefit of the Matérn class is that it is possible to get precise control over the degree of mean-square differentiability of the random process.

Uncertainty Quantification
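The smoothness control mentioned above can be seen in the closed-form half-integer Matérn covariances, where the parameter ν sets the degree of mean-square differentiability. This is a sketch of the standard Matérn baseline only, not the confluent hypergeometric class the paper proposes.

```python
import numpy as np

def matern(d, nu, ell=1.0):
    # Closed-form Matérn covariance for half-integer smoothness nu.
    # nu governs mean-square differentiability of the process:
    # nu = 0.5 gives the exponential covariance (non-differentiable paths),
    # nu = 1.5 once-differentiable, nu = 2.5 twice-differentiable.
    r = np.abs(d) / ell
    if nu == 0.5:
        return np.exp(-r)
    if nu == 1.5:
        s = np.sqrt(3) * r
        return (1 + s) * np.exp(-s)
    if nu == 2.5:
        s = np.sqrt(5) * r
        return (1 + s + s**2 / 3) * np.exp(-s)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented in this sketch")
```

At a fixed distance, larger ν gives a covariance that decays more slowly near the origin, reflecting the smoother sample paths; the paper's interest is that Matérn couples this smoothness to exponentially decaying tails, which its proposed class decouples.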

Horseshoe Regularization for Machine Learning in Complex and Deep Models

no code implementations • 24 Apr 2019 • Anindya Bhadra, Jyotishka Datta, Yunfan Li, Nicholas G. Polson

We also outline the recent computational developments in horseshoe shrinkage for complex models, along with a list of available software implementations that allow one to venture beyond the comfort zone of canonical linear regression problems.

BIG-bench Machine Learning, regression

Divide and Recombine for Large and Complex Data: Model Likelihood Functions using MCMC

no code implementations • 15 Jan 2018 • Qi Liu, Anindya Bhadra, William S. Cleveland

The density parameters are estimated by fitting the density to MCMC draws from each subset DM likelihood function, and then the fitted densities are recombined.

regression
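The fit-then-recombine step described above can be sketched in a simplified scalar form: fit a parametric density to each subset's MCMC draws, then recombine the fitted densities multiplicatively. The Gaussian density family and the fake draws below are assumptions for illustration, not the paper's actual density model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for MCMC draws from three subset likelihood
# functions for a scalar parameter; in practice these come from a sampler.
subset_draws = [rng.normal(loc=mu, scale=0.5, size=2000)
                for mu in (0.9, 1.0, 1.1)]

# Step 1: fit a parametric density (here Gaussian) to each subset's draws.
fits = [(d.mean(), d.var()) for d in subset_draws]

# Step 2: recombine the fitted densities. A product of Gaussians is again
# Gaussian, with precision-weighted mean -- a common simplification of the
# divide-and-recombine idea.
prec = sum(1.0 / v for _, v in fits)
mean = sum(m / v for m, v in fits) / prec
print(mean, 1.0 / prec)
```

The recombined density concentrates more tightly than any single subset fit, as expected when pooling information across subsets.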

Lasso Meets Horseshoe

1 code implementation • 30 Jun 2017 • Anindya Bhadra, Jyotishka Datta, Nicholas G. Polson, Brandon T. Willard

The goal of our paper is to survey and contrast the major advances in two of the most commonly used high-dimensional techniques, namely, the Lasso and horseshoe regularization methodologies.

Methodology; MSC: Primary 62J07, 62J05; Secondary 62H15, 62F03

Horseshoe Regularization for Feature Subset Selection

no code implementations • 23 Feb 2017 • Anindya Bhadra, Jyotishka Datta, Nicholas G. Polson, Brandon Willard

Feature subset selection arises in many high-dimensional applications of statistics, such as compressed sensing and genomics.

Uncertainty Quantification
