no code implementations • 13 Jun 2022 • Marc Jourdan, Rémy Degenne, Dorian Baudry, Rianne de Heide, Emilie Kaufmann
Top Two algorithms arose as an adaptation of Thompson sampling to best arm identification in multi-armed bandit models (Russo, 2016), for parametric families of arms.
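The core idea of a Top Two sampling rule can be illustrated with a minimal sketch. The snippet below assumes Gaussian posteriors over the arm means and a fixed leader probability `beta`; the posterior model, function names, and parameter values are illustrative choices, not the paper's specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_two_step(post_means, post_stds, beta=0.5):
    """One round of a generic Top Two sampling rule (illustrative sketch).

    Draw a mean vector from the per-arm Gaussian posteriors; its argmax
    is the 'leader'. With probability beta play the leader; otherwise
    resample the posterior until a different arm (the 'challenger')
    comes out on top, and play that one.
    """
    theta = rng.normal(post_means, post_stds)
    leader = int(np.argmax(theta))
    if rng.random() < beta:
        return leader
    # Challenger: resample until the sampled argmax differs from the leader.
    while True:
        theta = rng.normal(post_means, post_stds)
        challenger = int(np.argmax(theta))
        if challenger != leader:
            return challenger

arm = top_two_step(np.array([0.5, 0.4, 0.3]), np.array([0.1, 0.1, 0.1]))
```

Splitting draws between a leader and a challenger is what distinguishes Top Two rules from plain Thompson sampling, which would always play the sampled argmax and thus under-explore the runner-up arms needed for best arm identification.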
no code implementations • 31 May 2022 • Hidde Fokkema, Rianne de Heide, Tim van Erven
Finally, we strengthen our impossibility result for the restricted case where users are only able to change a single attribute of $x$, by providing an exact characterization of the functions $f$ to which impossibility applies.
no code implementations • NeurIPS 2021 • Rianne de Heide, James Cheshire, Pierre Ménard, Alexandra Carpentier
We characterize the optimal learning rates both in the cumulative regret setting, and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$ and $\Delta$.
no code implementations • 24 Oct 2019 • Xuedong Shang, Rianne de Heide, Emilie Kaufmann, Pierre Ménard, Michal Valko
We investigate and provide new insights into the sampling rule called Top-Two Thompson Sampling (TTTS).
no code implementations • 21 Oct 2019 • Rianne de Heide, Alisa Kirichenko, Nishant Mehta, Peter Grünwald
We study generalized Bayesian inference under misspecification, i.e., when the model is 'wrong but useful'.
1 code implementation • 18 Jun 2019 • Peter Grünwald, Rianne de Heide, Wouter Koolen
We develop the theory of hypothesis testing based on the e-value, a notion of evidence that, unlike the p-value, allows for effortlessly combining results from several studies in the common scenario where the decision to perform a new study may depend on previous outcomes.
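The combination property can be sketched concretely. An e-value is a nonnegative statistic with expectation at most 1 under the null, and e-values from successive studies combine by simple multiplication. The example below uses a likelihood ratio for a Bernoulli null as the e-value; the null and alternative parameters (`p0`, `p1`) and function name are illustrative assumptions, not taken from the paper.

```python
def lr_evalue(successes, trials, p0=0.5, p1=0.75):
    """Likelihood-ratio e-value for H0: p = p0 against the alternative p1.

    Under H0 this ratio has expectation 1, which is the defining
    property of an e-value; large values are evidence against H0.
    """
    return (p1 ** successes * (1 - p1) ** (trials - successes)) / \
           (p0 ** successes * (1 - p0) ** (trials - successes))

# Evidence from several studies combines by multiplication, even when
# the decision to run a later study depended on earlier outcomes.
e1 = lr_evalue(8, 10)
e2 = lr_evalue(7, 10)
combined = e1 * e2
```

By Markov's inequality, rejecting the null when the combined e-value reaches 1/alpha controls the type-I error at level alpha; p-values admit no such direct product rule under data-dependent continuation.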
no code implementations • 24 Jul 2018 • Allard Hendriksen, Rianne de Heide, Peter Grünwald
It is often claimed that Bayesian methods, in particular Bayes factor methods for hypothesis testing, can deal with optional stopping.