Search Results for author: Peter Grünwald

Found 14 papers, 4 papers with code

Minimax risk classifiers with 0-1 loss

no code implementations • 17 Jan 2022 • Santiago Mazuelas, Mauricio Romero, Peter Grünwald

Supervised classification techniques use training samples to learn a classification rule with small expected 0-1 loss (error probability).
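
A worked formula (a sketch in my own notation, not an excerpt from the paper): the quantity being controlled is the expected 0-1 loss, i.e. the error probability, of a classification rule h under a distribution P,

R_P(h) = \mathbb{E}_{(X,Y)\sim P}\big[\mathbf{1}\{h(X)\neq Y\}\big] = \Pr\big(h(X)\neq Y\big),

and a minimax risk classifier chooses h to minimize the worst case \max_{P\in\mathcal{U}} R_P(h) over an uncertainty set \mathcal{U} of distributions consistent with the training data (the set \mathcal{U} here is illustrative notation).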

Classification • General Classification

Generic E-Variables for Exact Sequential k-Sample Tests that allow for Optional Stopping

1 code implementation • 4 Jun 2021 • Rosanne Turner, Alexander Ly, Peter Grünwald

We develop E-variables for testing whether two or more data streams come from the same source or not, and more generally, whether the difference between the sources is larger than some minimal effect size.
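
For orientation, a minimal sketch of the defining property, in standard e-value notation rather than the paper's: an E-variable for a null hypothesis H_0 is a nonnegative statistic E satisfying

\mathbb{E}_{P}[E] \le 1 \quad \text{for every } P \in H_0.

Products of E-variables computed on successive blocks of data are again E-variables, and Ville's inequality gives \Pr_P\big(\sup_n \prod_{i\le n} E_i \ge 1/\alpha\big) \le \alpha for all P \in H_0, which is why tests built from them remain valid under optional stopping.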

Robust subgroup discovery

2 code implementations • 25 Mar 2021 • Hugo Manuel Proença, Peter Grünwald, Thomas Bäck, Matthijs van Leeuwen

This novel model class allows us to formalise the problem of optimal robust subgroup discovery using the Minimum Description Length (MDL) principle, where we resort to optimal Normalised Maximum Likelihood and Bayesian encodings for nominal and numeric targets, respectively.
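
As a hedged sketch of the NML encoding mentioned here (the textbook definition, not code or notation from the paper): for a model class with maximum-likelihood estimator \hat\theta, the Normalised Maximum Likelihood distribution over sequences x^n is

p_{\mathrm{NML}}(x^n) = \frac{p_{\hat\theta(x^n)}(x^n)}{\sum_{y^n} p_{\hat\theta(y^n)}(y^n)},

and the corresponding code length -\log p_{\mathrm{NML}}(x^n) serves as the description length of the data given a model; a Bayesian encoding instead uses -\log \int p_\theta(x^n)\,\pi(\theta)\,d\theta for a prior \pi.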

Subgroup Discovery

Discovering outstanding subgroup lists for numeric targets using MDL

3 code implementations • 16 Jun 2020 • Hugo M. Proença, Peter Grünwald, Thomas Bäck, Matthijs van Leeuwen

We propose a dispersion-aware problem formulation for subgroup set discovery that is based on the minimum description length (MDL) principle and subgroup lists.

Attribute • Subgroup Discovery

Safe-Bayesian Generalized Linear Regression

no code implementations • 21 Oct 2019 • Rianne de Heide, Alisa Kirichenko, Nishant Mehta, Peter Grünwald

We study generalized Bayesian inference under misspecification, i.e. when the model is 'wrong but useful'.
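
A minimal sketch of the generalized posterior this line of work builds on (standard η-posterior form; the symbol η is my notation for the learning rate, not a quote from the paper):

\pi_\eta(\theta \mid x_1,\dots,x_n) \;\propto\; \pi(\theta)\,\prod_{i=1}^{n} p_\theta(x_i)^{\eta},

where η = 1 recovers standard Bayes and η < 1 tempers the likelihood; SafeBayes-style procedures learn η from the data so that inference stays reliable when the model is misspecified.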

Bayesian Inference • regression

Minimum Description Length Revisited

no code implementations • 21 Aug 2019 • Peter Grünwald, Teemu Roos

This is an up-to-date introduction to and overview of the Minimum Description Length (MDL) Principle, a theory of inductive inference that can be applied to general problems in statistics, machine learning and pattern recognition.
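
For orientation, the classical two-part form of the principle (a textbook sketch, not an excerpt from this overview): choose the hypothesis H that minimizes the total description length

L(H) + L(D \mid H),

the number of bits needed to describe the hypothesis plus the number of bits needed to describe the data with the help of that hypothesis. Refined MDL replaces the two-part code by one-part universal codes such as NML or Bayesian marginal likelihoods.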

Data Compression • Model Selection • +1

Safe Testing

1 code implementation • 18 Jun 2019 • Peter Grünwald, Rianne de Heide, Wouter Koolen

We develop the theory of hypothesis testing based on the e-value, a notion of evidence that, unlike the p-value, allows for effortlessly combining results from several studies in the common scenario where the decision to perform a new study may depend on previous outcomes.
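
A hedged numerical sketch, not code from the paper, of why e-values combine across studies by multiplication: each e-value has expectation at most one under the null, so the running product is a test supermartingale and may be compared to 1/alpha at any time. The Gaussian likelihood-ratio e-value below is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05

def likelihood_ratio_evalue(x, mu_alt=0.5):
    # E-value for H0: X ~ N(0, 1) against the simple alternative N(mu_alt, 1).
    # A likelihood ratio with a fixed alternative is a valid e-value:
    # its expectation under H0 equals exactly 1.
    return float(np.exp(mu_alt * x - 0.5 * mu_alt**2))

# Studies generated under the null; the decision to run another study may
# depend on the current combined e-value without invalidating the test.
running_product = 1.0
for study in range(1, 21):
    x = rng.normal(loc=0.0, scale=1.0)
    running_product *= likelihood_ratio_evalue(x)
    if running_product >= 1 / alpha:  # by Ville's inequality this occurs with prob. <= alpha under H0
        print(f"rejected after study {study}")
        break
else:
    print("never rejected; combined e-value:", round(running_product, 3))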

Two-sample testing

Optional Stopping with Bayes Factors: a categorization and extension of folklore results, with an application to invariant situations

no code implementations • 24 Jul 2018 • Allard Hendriksen, Rianne de Heide, Peter Grünwald

It is often claimed that Bayesian methods, in particular Bayes factor methods for hypothesis testing, can deal with optional stopping.

Two-sample testing

Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning

no code implementations • NeurIPS 2016 • Wouter M. Koolen, Peter Grünwald, Tim van Erven

We consider online learning algorithms that guarantee worst-case regret rates in adversarial environments (so they can be deployed safely and will perform robustly), yet adapt optimally to favorable stochastic environments (so they will perform well in a variety of settings of practical importance).

Safe Probability

no code implementations • 6 Apr 2016 • Peter Grünwald

We formalize the idea of probability distributions that lead to reliable predictions about some, but not all aspects of a domain.

Learning the Learning Rate for Prediction with Expert Advice

no code implementations • NeurIPS 2014 • Wouter M. Koolen, Tim van Erven, Peter Grünwald

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate.
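
To make the role of this parameter concrete, here is a minimal exponential-weights (Hedge) sketch; the loss matrix and the name eta are illustrative assumptions, not the paper's setup or code.

import numpy as np

def hedge(losses, eta):
    # losses: array of shape (T, K), per-round losses of K experts in [0, 1].
    # eta: the learning rate; larger values concentrate weight faster on
    # experts that have performed well so far.
    T, K = losses.shape
    weights = np.ones(K) / K
    total = 0.0
    for t in range(T):
        total += float(weights @ losses[t])      # expected loss of the weighted prediction
        weights = weights * np.exp(-eta * losses[t])
        weights /= weights.sum()                 # renormalize to a probability vector
    return total

# Example: two experts, the second slightly better on average; different
# learning rates give noticeably different cumulative loss.
rng = np.random.default_rng(1)
losses = rng.uniform(size=(1000, 2)) * np.array([1.0, 0.9])
for eta in (0.01, 0.1, 1.0):
    print(eta, round(hedge(losses, eta), 1))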

Mixability in Statistical Learning

no code implementations • NeurIPS 2012 • Tim V. Erven, Peter Grünwald, Mark D. Reid, Robert C. Williamson

We show that, in the special case of log-loss, stochastic mixability reduces to a well-known (but usually unnamed) martingale condition, which is used in existing convergence theorems for minimum description length and Bayesian inference.
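
A hedged sketch of the condition being referred to, in the form it typically takes in that literature (notation mine, not the paper's): with P the data-generating distribution and f^* the log-loss risk minimizer in the model class \mathcal{F},

\mathbb{E}_{Z\sim P}\!\left[\frac{p_f(Z)}{p_{f^*}(Z)}\right] \;\le\; 1 \qquad \text{for all } f \in \mathcal{F},

so that likelihood ratios against the best model are supermartingales under P; when the model is well specified (P = p_{f^*}) the condition holds automatically.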

Bayesian Inference

Adaptive Hedge

no code implementations • NeurIPS 2011 • Tim V. Erven, Wouter M. Koolen, Steven D. Rooij, Peter Grünwald

In most previous analyses the learning rate was carefully tuned to obtain optimal worst-case performance, leading to suboptimal performance on easy instances, for example when there exists an action that is significantly better than all others.
