no code implementations • 4 Jan 2024 • James W. Bono, David H. Wolpert
It is known that a player in a noncooperative game can benefit by publicly restricting his possible moves before play begins.
no code implementations • 8 Aug 2022 • David H. Wolpert
In this essay I will consider a sequence of questions.
no code implementations • 11 Dec 2021 • David H. Wolpert, Michael H. Price, Stefani A. Crabtree, Timothy A. Kohler, Jurgen Jost, James Evans, Peter F. Stadler, Hajime Shimao, Manfred D. Laubichler
Historical processes manifest remarkable diversity.
no code implementations • 22 Mar 2021 • David H. Wolpert
Here I review the NFL theorems, emphasizing that they do not concern only the case of a uniform prior: loosely speaking, they prove that there are "as many priors" for which any induction algorithm $A$ out-generalizes some induction algorithm $B$ as vice versa.
no code implementations • 5 Jan 2021 • Gülce Kardeş, David H. Wolpert
After deriving these extended TURs we use them to obtain bounds that do not involve the global EP, but instead relate the local EPs of the individual systems and the statistical coupling among the currents generated within those systems.
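For orientation, the standard thermodynamic uncertainty relation (TUR) that such results extend bounds the relative fluctuations of any current $J$ by the global EP $\sigma$ (this is the textbook inequality, not the paper's extended bounds):
$$\frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2}{\sigma}.$$
The extended TURs instead bound currents using the local EPs of the individual systems and the statistical coupling among those currents.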
no code implementations • 28 Oct 2020 • David H. Wolpert, David Kinney
We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process.
no code implementations • 21 Jul 2020 • David H. Wolpert
The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally.
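A toy illustration of this averaging claim, in a deliberately tiny search setting (domain of three points, binary objective values, and two fixed non-adaptive evaluation orders standing in for "algorithms"; an illustrative sketch, not the theorems' general proof):

```python
from itertools import product

# Average, over ALL functions f: {0,1,2} -> {0,1}, of the best value an
# algorithm (here: a fixed evaluation order) has seen after k evaluations.
domain = [0, 1, 2]
orders = {"A": [0, 1, 2], "B": [2, 0, 1]}  # two different search algorithms

def avg_best_after(order, k):
    funcs = list(product([0, 1], repeat=len(domain)))
    return sum(max(f[x] for x in order[:k]) for f in funcs) / len(funcs)

# Under the uniform distribution over functions, performance is identical:
for k in (1, 2, 3):
    assert avg_best_after(orders["A"], k) == avg_best_after(orders["B"], k)
```

Any algorithm's advantage on some functions is exactly cancelled by its disadvantage on others when all functions are weighted equally.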
no code implementations • 7 Jan 2020 • David H. Wolpert
I consider multipartite processes in which there are constraints on each subsystem's rate matrix, restricting which other subsystems can directly affect its dynamics.
no code implementations • 7 Nov 2019 • David H. Wolpert
I derive fluctuation theorems governing the entropy production (EP) of arbitrary sets of the systems in such a Bayes net.
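For reference, the canonical integral fluctuation theorem for a total EP $\sigma$ (the standard single-system result, shown here only for orientation; the paper's theorems concern sets of systems in a Bayes net):
$$\left\langle e^{-\sigma} \right\rangle = 1 \quad\Longrightarrow\quad \langle \sigma \rangle \ge 0,$$
where the implication follows from Jensen's inequality.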
no code implementations • 18 Jan 2018 • Brendan D. Tracey, David H. Wolpert
The Student's-t distribution has higher kurtosis than a Gaussian distribution, so outliers are much more likely, and the posterior variance increases or decreases depending on the variance of the observed data sample values.
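A quick numerical check of the kurtosis claim (an illustrative sketch only, not the paper's Bayesian derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(samples):
    """Sample excess kurtosis: 0 for a Gaussian, 6/(nu - 4) for Student's-t."""
    z = samples - samples.mean()
    return float(np.mean(z**4) / np.mean(z**2) ** 2 - 3.0)

nu = 10  # analytic excess kurtosis of Student's-t: 6 / (10 - 4) = 1
t_samples = rng.standard_t(nu, size=500_000)
g_samples = rng.standard_normal(500_000)

assert abs(excess_kurtosis(g_samples)) < 0.2  # Gaussian: near zero
assert excess_kurtosis(t_samples) > 0.5       # heavier tails -> more outliers
```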
3 code implementations • 6 May 2017 • Artemy Kolchinsky, Brendan D. Tracey, David H. Wolpert
The information bottleneck (IB) is a technique for extracting the information in one random variable $X$ that is relevant for predicting another random variable $Y$.
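The trade-off behind IB can be seen with a toy joint distribution and two hard encoders $T = f(X)$ (a minimal sketch with assumed deterministic encoders and an assumed joint $p(x,y)$; this is not the paper's method):

```python
import numpy as np

# p(x, y): x in {0,1} mostly implies y=0; x in {2,3} mostly implies y=1.
pxy = np.array([[0.20, 0.05],
                [0.20, 0.05],
                [0.05, 0.20],
                [0.05, 0.20]])

def mutual_info(pab):
    """Mutual information (bits) of a 2-D joint distribution."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    nz = pab > 0
    return float((pab[nz] * np.log2(pab[nz] / (pa @ pb)[nz])).sum())

def cluster_joint(pxy, f):
    """Joint p(t, y) induced by the deterministic encoder t = f(x)."""
    pty = np.zeros((max(f) + 1, pxy.shape[1]))
    for x, t in enumerate(f):
        pty[t] += pxy[x]
    return pty

relevant   = [0, 0, 1, 1]  # merges x-values that predict y the same way
irrelevant = [0, 1, 0, 1]  # merges x-values that predict y differently
# Both encoders compress X to one bit, but only one keeps info about Y:
assert mutual_info(cluster_joint(pxy, relevant)) > mutual_info(cluster_joint(pxy, irrelevant))
```

IB formalizes this as maximizing $I(T;Y)$ subject to a constraint on $I(T;X)$: among encoders with the same compression level, prefer the one that keeps predictive information.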
no code implementations • NeurIPS 2016 • Remi Lam, Karen Willcox, David H. Wolpert
We consider the problem of optimizing an expensive objective function when a finite budget of total evaluations is prescribed.
no code implementations • 7 Jun 2016 • Brendan D. Tracey, David H. Wolpert
Crucially, it is a post-processing technique, requiring no additional samples, and can be applied to data generated by any MC estimator.
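One standard post-processing scheme of this flavor is a control variate, which corrects a plain MC estimate using the same samples plus a correlated function with known mean (a generic sketch under assumed choices of the target and the control; not necessarily the paper's specific construction):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(x)  # target: E[e^X] with X ~ U(0,1); true value e - 1
g = lambda x: x          # control variate with known mean 1/2

def estimates(n):
    """Plain MC estimate and its control-variate post-processing."""
    x = rng.uniform(size=n)
    fx, gx = f(x), g(x)
    c = np.cov(fx, gx)               # coefficient fitted from the same samples
    beta = c[0, 1] / c[1, 1]
    return fx.mean(), fx.mean() - beta * (gx.mean() - 0.5)

runs = np.array([estimates(200) for _ in range(500)])
plain, cv = runs[:, 0], runs[:, 1]
assert cv.std() < plain.std()  # post-processing shrinks the MC error
```

No extra samples of $f$ are drawn: the correction reuses the original evaluations, which is what makes this a pure post-processing step.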
no code implementations • 25 Sep 2014 • David H. Wolpert, Joshua A. Grochow, Eric Libby, Simon DeDeo
These include SSC as a measure of the complexity of a dynamical system, and as a way to quantify information flow between the scales of a system.
no code implementations • 9 Aug 2014 • Erik J. Schlicht, Ritchie Lee, David H. Wolpert, Mykel J. Kochenderfer, Brendan Tracey
Multi-fidelity methods combine inexpensive low-fidelity simulations with costly but high-fidelity simulations to produce an accurate model of a system of interest at minimal cost.
1 code implementation • 8 Mar 1994 • David H. Wolpert, David R. Wolf
We present estimators for entropy and other functions of a discrete probability distribution when the data is a finite sample drawn from that probability distribution.
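For context on why such estimators are needed: the naive "plug-in" estimate $H(\hat p)$ computed from a finite sample is biased low (a sketch of the baseline problem; the Wolpert-Wolf Bayesian estimators themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 4, 10
p_true = np.full(k, 1 / k)  # uniform over k symbols
H_true = np.log2(k)         # true entropy: 2 bits

def plugin_entropy(sample):
    """Entropy (bits) of the empirical distribution of the sample."""
    p = np.bincount(sample, minlength=k) / len(sample)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

est = [plugin_entropy(rng.choice(k, size=n, p=p_true)) for _ in range(2000)]
assert np.mean(est) < H_true  # systematic downward bias at small n
```

With only $n = 10$ draws, the empirical distribution looks less uniform than the truth, so the plug-in estimate underestimates entropy on average; estimators designed for the finite-sample regime correct for this.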