Search Results for author: Jonas M. Kübler

Found 8 papers, 6 papers with code

Multi-Armed Bandits and Quantum Channel Oracles

no code implementations • 20 Jan 2023 • Simon Buchholz, Jonas M. Kübler, Bernhard Schölkopf

Here we introduce further bandit models in which we have only limited access to the randomness of the rewards but can still query the arms in superposition.

Multi-Armed Bandits · Reinforcement Learning +1

AutoML Two-Sample Test

3 code implementations • 17 Jun 2022 • Jonas M. Kübler, Vincent Stimper, Simon Buchholz, Krikamol Muandet, Bernhard Schölkopf

Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery and for detecting distribution shifts.

AutoML · Two-sample testing +1

Quantum machine learning beyond kernel methods

1 code implementation • 25 Oct 2021 • Sofiene Jerbi, Lukas J. Fiderer, Hendrik Poulsen Nautrup, Jonas M. Kübler, Hans J. Briegel, Vedran Dunjko

In this work, we identify a constructive framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models.

BIG-bench Machine Learning · Quantum Machine Learning

The Inductive Bias of Quantum Kernels

1 code implementation • NeurIPS 2021 • Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf

Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.

Inductive Bias · Quantum Machine Learning
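
The inner products mentioned in the abstract underlie so-called fidelity quantum kernels. As a toy illustration (not the paper's construction), a data point can be encoded into a quantum state by a parametrized rotation, and the kernel is the squared overlap of the two encoded states; the single-qubit `RY` encoding below is an illustrative assumption, simulated classically with numpy.

```python
# Toy fidelity quantum kernel via classical statevector simulation.
# A scalar x is encoded as |phi(x)> = RY(x)|0>, and the kernel is
# k(x, y) = |<phi(x)|phi(y)>|^2. The encoding is an illustrative
# choice, not the circuit family studied in the paper.
import numpy as np

def feature_state(x):
    # RY(x)|0> = [cos(x/2), sin(x/2)] on a single qubit
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x, y):
    # Squared overlap of the two encoded states; for this encoding
    # it equals cos^2((x - y) / 2).
    return abs(feature_state(x) @ feature_state(y)) ** 2

print(quantum_kernel(0.3, 0.3))        # identical states: kernel ~ 1
print(quantum_kernel(0.0, np.pi))      # orthogonal states: kernel ~ 0
```

On real hardware the overlap would be estimated from measurement statistics rather than computed exactly; the exponential cost of simulating larger circuits classically is what motivates the quantum approach.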

A Witness Two-Sample Test

1 code implementation10 Feb 2021 Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet

The test set is used to simultaneously estimate the expectations and define the basis points, while the training set serves only to select the kernel and is then discarded.

Two-sample testing
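
The general witness-test idea can be sketched as follows: learn a witness function on one split of the data (here, the difference of kernel mean embeddings, with the training points as basis points), then compare its mean value on held-out samples from the two distributions with a simple t-statistic. This is only an illustration of the approach under assumed choices (RBF kernel, unit bandwidth, a plain `ttest_ind`), not the paper's exact procedure.

```python
# Witness-style two-sample test sketch: train a witness on one split,
# then run a t-test on its held-out values. Kernel and bandwidth are
# illustrative assumptions.
import numpy as np
from scipy import stats

def rbf(X, Y, bw=1.0):
    # RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 bw^2))
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bw ** 2))

def witness(X_tr, Y_tr, Z, bw=1.0):
    # h(z) = mean_i k(x_i, z) - mean_j k(y_j, z), basis points from training
    return rbf(X_tr, Z, bw).mean(0) - rbf(Y_tr, Z, bw).mean(0)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(400, 1))   # sample from P
Y = rng.normal(0.7, 1.0, size=(400, 1))   # sample from a shifted Q
X_tr, X_te = X[:200], X[200:]
Y_tr, Y_te = Y[:200], Y[200:]

hx = witness(X_tr, Y_tr, X_te)   # witness values on held-out X
hy = witness(X_tr, Y_tr, Y_te)   # witness values on held-out Y
t_stat, p_value = stats.ttest_ind(hx, hy)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

Because the test statistic is a simple difference of means of a fixed function, its null distribution is easy to characterize, which is what makes this style of test attractive.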

Learning Kernel Tests Without Data Splitting

1 code implementation • NeurIPS 2020 • Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet

Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics.
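
The data-splitting pipeline the abstract refers to can be sketched as follows: choose an RBF bandwidth on a held-out half (via the median heuristic, an illustrative choice), then run a permutation-based MMD test on the other half. The sample sizes, shift, and permutation count are assumptions for the demo, not values from the paper.

```python
# Sketch of an MMD two-sample test with data splitting: the first
# halves select the kernel bandwidth, the second halves are tested.
import numpy as np

def rbf_kernel(X, Y, bandwidth):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def mmd2_biased(X, Y, bandwidth):
    # Biased V-statistic estimate of MMD^2 between samples X and Y
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, bandwidth, n_perm=200, seed=None):
    # p-value from reshuffling the pooled sample under the null
    rng = np.random.default_rng(seed)
    stat = mmd2_biased(X, Y, bandwidth)
    Z, n = np.vstack([X, Y]), len(X)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        null.append(mmd2_biased(Z[perm[:n]], Z[perm[n:]], bandwidth))
    return stat, (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))

# Data splitting: train halves pick the kernel, test halves are tested.
X_tr, X_te = X[:100], X[100:]
Y_tr, Y_te = Y[:100], Y[100:]
pooled = np.vstack([X_tr, Y_tr])
dists = np.sqrt(((pooled[:, None] - pooled[None, :]) ** 2).sum(-1))
bandwidth = np.median(dists[dists > 0])  # median heuristic on train split

stat, p_value = permutation_test(X_te, Y_te, bandwidth, seed=1)
print(f"MMD^2 = {stat:.4f}, p = {p_value:.3f}")
```

The paper's point is that discarding half the data this way costs test power, and that the split can be avoided; the sketch only shows the baseline being improved upon.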

Quantum Mean Embedding of Probability Distributions

no code implementations • 31 May 2019 • Jonas M. Kübler, Krikamol Muandet, Bernhard Schölkopf

The kernel mean embedding of probability distributions is commonly used in machine learning as an injective mapping from distributions to functions in an infinite dimensional Hilbert space.
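
The classical object being generalized here can be sketched in a few lines: a distribution P maps to its mean embedding mu_P = E_{x~P}[k(x, ·)], estimated from a sample by averaging kernel evaluations, and for a characteristic kernel (such as the RBF used below, an illustrative choice) the map is injective, so distances between embeddings distinguish distributions.

```python
# Empirical kernel mean embeddings: inner products between embeddings
# reduce to averaged kernel evaluations, and the squared RKHS distance
# ||mu_P - mu_Q||^2 = <P,P> + <Q,Q> - 2<P,Q> separates distributions.
import numpy as np

def rbf(X, Y, bw=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bw ** 2))

def embedding_inner(X, Y, bw=1.0):
    # <mu_P_hat, mu_Q_hat> = (1 / (n m)) * sum_ij k(x_i, y_j)
    return rbf(X, Y, bw).mean()

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 1))   # sample from P
Y = rng.normal(0, 1, size=(500, 1))   # second sample from the same P
Z = rng.normal(3, 1, size=(500, 1))   # sample from a different Q

d_same = (embedding_inner(X, X) + embedding_inner(Y, Y)
          - 2 * embedding_inner(X, Y))
d_diff = (embedding_inner(X, X) + embedding_inner(Z, Z)
          - 2 * embedding_inner(X, Z))
print(f"same P: {d_same:.4f}, different Q: {d_diff:.4f}")
```

The quantum variant proposed in the paper replaces this function-space embedding with an embedding into quantum states; the classical sketch above is only the starting point.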
