Search Results for author: David H. Wolpert

Found 16 papers, 2 papers with code

Game Mining: How to Make Money from those about to Play a Game

no code implementations • 4 Jan 2024 • James W. Bono, David H. Wolpert

It is known that a player in a noncooperative game can benefit by publicly restricting his possible moves before play begins.
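A standard textbook illustration of this commitment effect (not an example taken from the paper) is the game of Chicken, with Row's payoff listed first:

```latex
\[
\begin{array}{c|cc}
 & \text{Swerve} & \text{Straight} \\ \hline
\text{Swerve}   & (3,\,3) & (2,\,4) \\
\text{Straight} & (4,\,2) & (1,\,1)
\end{array}
\]
```

If Row publicly removes Swerve from his move set before play, Column's best response is to swerve, so Row secures his preferred payoff of 4 rather than risking the crash payoff of 1 or settling for 2.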

What can we know about that which we cannot even imagine?

no code implementations • 8 Aug 2022 • David H. Wolpert

In this essay I will consider a sequence of questions.

The Implications of the No-Free-Lunch Theorems for Meta-induction

no code implementations • 22 Mar 2021 • David H. Wolpert

Here I review the NFL theorems, emphasizing that they concern not only the case of a uniform prior -- they prove that there are "as many priors" (loosely speaking) for which any induction algorithm $A$ out-generalizes some induction algorithm $B$ as vice versa.

Thermodynamic Uncertainty Relations for Multipartite Processes

no code implementations • 5 Jan 2021 • Gülce Kardeş, David H. Wolpert

After deriving these extended TURs we use them to obtain bounds that do not involve the global EP, but instead relate the local EPs of the individual systems and the statistical coupling among the currents generated within those systems.

Statistical Mechanics
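For context, the single-system thermodynamic uncertainty relation that these multipartite results extend bounds the relative fluctuations of any current $J$ by the total entropy production $\langle \Sigma \rangle$ (the textbook form, not the bound derived in the paper):

```latex
\[
\frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2 k_{B}}{\langle \Sigma \rangle} .
\]
```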

Noisy Deductive Reasoning: How Humans Construct Math, and How Math Constructs Universes

no code implementations • 28 Oct 2020 • David H. Wolpert, David Kinney

We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process.

Math · Mathematical Reasoning

What is important about the No Free Lunch theorems?

no code implementations • 21 Jul 2020 • David H. Wolpert

The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally.

Philosophy
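In the search setting, the theorems can be written as the statement that the distribution over performance histograms, averaged uniformly over all objective functions $f$, is the same for any two algorithms $a_1$ and $a_2$ (the standard formulation, with $d_m^y$ the usual notation for the sequence of sampled cost values after $m$ evaluations):

```latex
\[
\sum_{f} P\!\left(d_{m}^{y} \,\middle|\, f, m, a_{1}\right)
\;=\;
\sum_{f} P\!\left(d_{m}^{y} \,\middle|\, f, m, a_{2}\right) .
\]
```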

Minimal entropy production due to constraints on rate matrix dependencies in multipartite processes

no code implementations • 7 Jan 2020 • David H. Wolpert

I consider multipartite processes in which there are constraints on each subsystem's rate matrix, restricting which other subsystems can directly affect its dynamics.

Counterfactual
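For reference, a multipartite process is conventionally defined by a rate matrix that decomposes into subsystem-specific terms, each of which can change only its own subsystem's state; the constraints studied in the paper further restrict which subsystems each term may depend on. In standard notation (an assumption about conventions, not an equation quoted from the paper):

```latex
\[
K_{x' x} \;=\; \sum_{i} K^{i}_{x' x},
\qquad
K^{i}_{x' x} = 0 \quad \text{unless } x'_{j} = x_{j} \ \text{for all } j \neq i .
\]
```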

Uncertainty relations and fluctuation theorems for Bayes nets

no code implementations • 7 Nov 2019 • David H. Wolpert

I derive fluctuation theorems governing the entropy production (EP) of arbitrary sets of the systems in such a Bayes net.
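For context, the baseline result that such fluctuation theorems generalize is the integral fluctuation theorem for the total EP $\sigma$ of a single process, stated here in units of $k_B$ (the textbook form, not the Bayes-net version derived in the paper):

```latex
\[
\left\langle e^{-\sigma} \right\rangle = 1
\quad \Longrightarrow \quad
\langle \sigma \rangle \ge 0 \ \text{(Jensen's inequality)} .
\]
```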

Upgrading from Gaussian Processes to Student's-T Processes

no code implementations • 18 Jan 2018 • Brendan D. Tracey, David H. Wolpert

The Student's-T distribution has higher kurtosis than a Gaussian distribution, so outliers are much more likely, and the posterior variance increases or decreases depending on the variance of the observed data sample values.

Bayesian Optimization · Gaussian Processes
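A minimal sketch of the data-dependent variance scaling that distinguishes a Student's-T process from a GP, using the commonly cited TP predictive equations under a zero-mean, noise-free model with an illustrative RBF kernel (an assumption-laden sketch, not code from the paper):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel (illustrative choice)."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def gp_tp_predictive_var(x_train, y_train, x_test, nu=5.0, jitter=1e-8):
    """Compare GP predictive variance with TP predictive variance.

    Uses the commonly cited Student's-T process predictive equations
    (zero mean, noise-free observations): the TP variance equals the GP
    variance rescaled by (nu + beta - 2) / (nu + n - 2), where
    beta = y^T K11^{-1} y.  Illustrative sketch only.
    """
    n = len(x_train)
    K11 = rbf(x_train, x_train) + jitter * np.eye(n)
    K12 = rbf(x_train, x_test)
    K22 = rbf(x_test, x_test)
    gp_var = np.diag(K22 - K12.T @ np.linalg.solve(K11, K12))
    beta = float(y_train @ np.linalg.solve(K11, y_train))
    tp_var = (nu + beta - 2.0) / (nu + n - 2.0) * gp_var
    return gp_var, tp_var

# Toy data: the more "surprising" the observed y values (large beta),
# the more the TP inflates its predictive variance relative to the GP.
x = np.array([-1.0, 0.0, 1.0])
x_star = np.array([0.5])
for y in (np.array([0.1, 0.0, -0.1]), np.array([2.0, -2.0, 2.0])):
    gp_v, tp_v = gp_tp_predictive_var(x, y, x_star)
    print(f"y={y}: GP var={gp_v[0]:.4f}, TP var={tp_v[0]:.4f}")
```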

Nonlinear Information Bottleneck

3 code implementations • 6 May 2017 • Artemy Kolchinsky, Brendan D. Tracey, David H. Wolpert

Information bottleneck (IB) is a technique for extracting information in one random variable $X$ that is relevant for predicting another random variable $Y$.
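The underlying IB objective, in its usual Lagrangian form (the generic objective, not the particular variational bound proposed in the paper), trades off compressing $X$ against predicting $Y$ through a bottleneck variable $Z$:

```latex
\[
\max_{P(Z \mid X)} \; I(Z; Y) \;-\; \beta \, I(Z; X), \qquad \beta > 0 .
\]
```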

Bayesian Optimization with a Finite Budget: An Approximate Dynamic Programming Approach

no code implementations • NeurIPS 2016 • Remi Lam, Karen Willcox, David H. Wolpert

We consider the problem of optimizing an expensive objective function when a finite budget of total evaluations is prescribed.

Bayesian Optimization
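Casting the remaining evaluation budget as a finite-horizon dynamic program gives a Bellman-style recursion over the surrogate posterior; schematically, with notation chosen here for illustration rather than taken from the paper ($S_k$ the posterior state after $k$ evaluations, $r$ a per-step reward such as the reduction in the best observed value):

```latex
\[
V_{N}(S_{N}) = 0,
\qquad
V_{k}(S_{k}) \;=\; \max_{x}\;
\mathbb{E}_{y}\!\left[\, r(S_{k}, x, y) + V_{k+1}\!\big(S_{k+1}\big) \,\right],
\quad k = N-1, \dots, 0,
\]
```

where $S_{k+1}$ is the posterior after observing $(x, y)$. Solving this recursion exactly is intractable, which motivates approximate dynamic programming (e.g., rollout of a simpler acquisition policy).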

Reducing the error of Monte Carlo Algorithms by Learning Control Variates

no code implementations • 7 Jun 2016 • Brendan D. Tracey, David H. Wolpert

Crucially, it is a post-processing technique, requiring no additional samples, and can be applied to data generated by any MC estimator.
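A minimal sketch of the classical control-variate estimator that this line of work builds on (generic post-processing with a hand-picked control variate, not the learned control variates of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[f(U)] for f(u) = exp(u), U ~ Uniform(0, 1); true value is e - 1.
# Use g(u) = u as a control variate, whose mean E[g] = 0.5 is known exactly.
n = 10_000
u = rng.uniform(size=n)
f = np.exp(u)
g = u

plain = f.mean()

# Optimal coefficient c* = Cov(f, g) / Var(g), estimated from the same
# samples: a pure post-processing step, no extra integrand evaluations.
c = np.cov(f, g)[0, 1] / np.var(g, ddof=1)
cv = (f - c * (g - 0.5)).mean()

print(f"true          : {np.e - 1:.5f}")
print(f"plain MC      : {plain:.5f}")
print(f"control variate: {cv:.5f}")
print(f"variance ratio (CV / plain): "
      f"{np.var(f - c * (g - 0.5)) / np.var(f):.3f}")
```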

Optimal high-level descriptions of dynamical systems

no code implementations • 25 Sep 2014 • David H. Wolpert, Joshua A. Grochow, Eric Libby, Simon DeDeo

These include state space compression (SSC) as a measure of the complexity of a dynamical system, and as a way to quantify information flow between the scales of a system.

Vocal Bursts Intensity Prediction

Predicting the behavior of interacting humans by fusing data from multiple sources

no code implementations • 9 Aug 2014 • Erik J. Schlicht, Ritchie Lee, David H. Wolpert, Mykel J. Kochenderfer, Brendan Tracey

Multi-fidelity methods combine inexpensive low-fidelity simulations with costly but high-fidelity simulations to produce an accurate model of a system of interest at minimal cost.
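A minimal sketch of the generic multi-fidelity idea, using a simple additive-correction surrogate chosen here purely for illustration (not the data-fusion method of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):          # expensive "truth" (toy stand-in)
    return np.sin(8 * x) + 0.3 * x

def low_fidelity(x):           # cheap, biased approximation
    return np.sin(8 * x)

# Many cheap samples, few expensive ones.
x_lo = rng.uniform(0, 1, 200)
x_hi = rng.uniform(0, 1, 8)

# Step 1: surrogate of the low-fidelity model (here: a polynomial fit).
lo_fit = np.polynomial.Polynomial.fit(x_lo, low_fidelity(x_lo), deg=9)

# Step 2: model the discrepancy (high - low) from the few expensive samples.
delta_fit = np.polynomial.Polynomial.fit(
    x_hi, high_fidelity(x_hi) - lo_fit(x_hi), deg=2)

# Multi-fidelity prediction = cheap surrogate + learned correction.
x_test = np.linspace(0, 1, 5)
pred = lo_fit(x_test) + delta_fit(x_test)
print("max abs error of fused model:",
      np.abs(pred - high_fidelity(x_test)).max())
```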

Estimating Functions of Probability Distributions from a Finite Set of Samples, Part 1: Bayes Estimators and the Shannon Entropy

1 code implementation • 8 Mar 1994 • David H. Wolpert, David R. Wolf

We present estimators for entropy and other functions of a discrete probability distribution when the data is a finite sample drawn from that probability distribution.
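A minimal sketch of a Bayes estimator of the Shannon entropy under a symmetric Dirichlet prior, in the spirit of the estimators the paper studies; the closed form below is the standard Dirichlet posterior mean, and the code is illustrative rather than the authors' implementation:

```python
import numpy as np
from scipy.special import digamma

def bayes_entropy(counts, alpha=1.0):
    """Posterior mean of the Shannon entropy (in nats) of a discrete
    distribution, given observed counts and a symmetric Dirichlet(alpha)
    prior.  Uses the standard closed form
        E[H] = psi(B + 1) - sum_i (b_i / B) * psi(b_i + 1),
    with b_i = n_i + alpha and B = sum_i b_i.
    """
    b = np.asarray(counts, dtype=float) + alpha
    B = b.sum()
    return digamma(B + 1.0) - np.sum((b / B) * digamma(b + 1.0))

# Small-sample example: the plug-in estimate is typically biased low,
# while the Bayes estimate is pulled toward the prior.
counts = np.array([3, 1, 1, 0, 0])
p_hat = counts / counts.sum()
plugin = -np.sum(p_hat[p_hat > 0] * np.log(p_hat[p_hat > 0]))
print(f"plug-in : {plugin:.3f} nats")
print(f"Bayes   : {bayes_entropy(counts):.3f} nats")
```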
