no code implementations • 8 Jul 2023 • Aditya Gupta, Shiva Maharaj, Nicholas Polson, Vadim Sokolov
We propose a neural network-based approach to calculate the value of a chess square-piece combination.
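The idea of scoring a square-piece pair with a network can be sketched as follows. This is a minimal illustration with an assumed encoding (squares 0-63, pieces 0-11) and untrained random weights, not the paper's architecture:

```python
import numpy as np

# Illustrative sketch (not the paper's model): a tiny one-hidden-layer
# network mapping a one-hot (square, piece) encoding to a scalar value.
# Square indices 0-63 and piece indices 0-11 (6 piece types x 2 colors)
# are assumed encodings for this example.

rng = np.random.default_rng(0)

def encode(square, piece):
    """One-hot encode a (square, piece) pair into a length-76 vector."""
    x = np.zeros(64 + 12)
    x[square] = 1.0
    x[64 + piece] = 1.0
    return x

# Randomly initialised weights; in practice these would be trained on
# engine evaluations or game outcomes.
W1 = rng.normal(0.0, 0.1, (32, 76))
b1 = np.zeros(32)
w2 = rng.normal(0.0, 0.1, 32)

def value(square, piece):
    h = np.tanh(W1 @ encode(square, piece) + b1)  # hidden layer
    return float(w2 @ h)                          # scalar square-piece value

print(value(square=27, piece=4))  # e.g. a knight on d4 under the assumed encoding
```

With trained weights, comparing `value` across squares for a fixed piece would give a heat map of where that piece is worth the most.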
no code implementations • 31 Dec 2022 • Maria Nareklishvili, Nicholas Polson, Vadim Sokolov
In particular, our method is able to capture policy effect heterogeneity both within and across subgroups of the population defined by observable characteristics.
no code implementations • 5 Oct 2021 • Shiva Maharaj, Nicholas Polson, Christian Turk
This allows us to calculate the $Q$-values of a gambit, where material (usually a pawn) is sacrificed in exchange for dynamic play.
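The gambit trade-off can be sketched as a one-step Bellman backup. All numbers below are made up for illustration; they are not values from the paper:

```python
# Hedged sketch: a toy Bellman backup comparing the Q-value of a quiet move
# against a gambit that gives up a pawn (reward -1 in material units) for a
# more dynamic resulting position. Rewards and position values are invented.

gamma = 0.95  # discount factor (assumed)

# Immediate reward (material change) and estimated value of the next position.
quiet  = {"reward": 0.0,  "next_value": 0.10}
gambit = {"reward": -1.0, "next_value": 1.30}  # a pawn down, but dynamic play

def q_value(move):
    # Q(s, a) = r + gamma * V(s')
    return move["reward"] + gamma * move["next_value"]

print(q_value(quiet), q_value(gambit))
```

Under these invented numbers the gambit's $Q$-value exceeds the quiet move's: the discounted value of the dynamic position outweighs the immediate material loss.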
no code implementations • 3 Jun 2021 • Jingyu He, Nicholas Polson, Jianeng Xu
We use the theory of normal variance-mean mixtures to derive a data augmentation scheme for models that include gamma functions.
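The mixture representation that makes such augmentation tractable can be sketched as follows. The exponential mixing distribution below is an illustrative choice (it yields an asymmetric Laplace law), not the specific scheme of the paper:

```python
import numpy as np

# Sketch of the normal variance-mean mixture representation
#     X = mu + beta * W + sqrt(W) * Z,   Z ~ N(0, 1),   W ~ mixing law.
# Conditional on the latent W, X is Gaussian, which is what makes data
# augmentation (treating W as a latent variable) work. The exponential
# mixing choice here is illustrative only.

rng = np.random.default_rng(1)
n = 100_000
mu, beta = 0.0, 0.5

W = rng.exponential(scale=1.0, size=n)   # latent mixing variable, E[W] = 1
Z = rng.standard_normal(n)
X = mu + beta * W + np.sqrt(W) * Z       # normal variance-mean mixture draws

# Sanity check: E[X] = mu + beta * E[W] = 0.5 under this mixing choice.
print(X.mean())
```

A Gibbs sampler built on this representation alternates between drawing $W$ given the data and drawing the model parameters from the resulting conditionally Gaussian model.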
no code implementations • 26 Aug 2018 • Nicholas Polson, Vadim Sokolov
In this article we review computational aspects of Deep Learning (DL).
no code implementations • NeurIPS 2018 • Nicholas Polson, Veronika Rockova
As an aside, we show that SS-DL does not overfit, in the sense that the posterior concentrates on smaller networks with fewer nodes and links (up to the optimal number).
no code implementations • 1 Sep 2017 • Guanhao Feng, Nicholas Polson, Yuexi Wang, Jianeng Xu
Alpha-norm regularization, in contrast to lasso and ridge, jumps to a sparse solution rather than shrinking coefficients continuously.
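The jump can be seen numerically in the scalar case. The sketch below minimises $\tfrac12(z-b)^2 + \lambda\,\mathrm{pen}(b)$ on a grid for each penalty; the values of $\lambda$ and $\alpha$ are illustrative choices, not the paper's:

```python
import numpy as np

# Illustrative sketch of why an alpha-norm penalty (|b|^alpha, alpha < 1)
# "jumps" to a sparse solution while ridge shrinks smoothly and lasso
# soft-thresholds. We minimise 0.5*(z - b)^2 + lam*pen(b) on a grid and
# watch the minimiser as the data value z decreases.

lam, alpha = 1.0, 0.5            # assumed penalty weight and exponent
grid = np.linspace(-3, 3, 60001)  # grid step 1e-4

def prox(z, pen):
    obj = 0.5 * (grid - z) ** 2 + lam * pen(grid)
    return grid[np.argmin(obj)]

ridge = lambda b: b ** 2
lasso = lambda b: np.abs(b)
anorm = lambda b: np.abs(b) ** alpha

for z in (1.6, 1.4):
    print(z, prox(z, ridge), prox(z, lasso), prox(z, anorm))
```

As $z$ falls from 1.6 to 1.4, the lasso solution shrinks continuously (0.6 to 0.4) while the alpha-norm solution jumps discontinuously from a value above 1 to exactly 0, which is the sparsity-inducing behaviour described above.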
no code implementations • 1 Jun 2017 • Nicholas Polson, Vadim Sokolov
Deep learning is a form of machine learning for nonlinear, high-dimensional pattern matching and prediction.
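The nonlinearity is essential: a classic way to see it is XOR, a pattern no linear model can match but a two-layer network can. The weights below are hand-constructed for illustration, not trained:

```python
import numpy as np

# Hedged illustration: a two-layer network with hand-picked weights
# computes XOR by composing nonlinear units. Stacking such layers is the
# source of deep learning's pattern-matching power.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def xor_net(x1, x2):
    h1 = sigmoid(10.0 * (x1 + x2 - 0.5))    # ~ OR(x1, x2)
    h2 = sigmoid(10.0 * (x1 + x2 - 1.5))    # ~ AND(x1, x2)
    return sigmoid(10.0 * (h1 - h2 - 0.5))  # ~ OR-and-not-AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))
```

No single-layer (linear-in-inputs) model can separate these four points, which is why depth and nonlinearity matter together.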