no code implementations • 2 Nov 2023 • Kensuke Mitsuzawa, Motonobu Kanagawa, Stefano Bortoli, Margherita Grossi, Paolo Papotti
For this optimisation, we introduce sparse regularisation and propose two methods for dealing with the issue of selecting an appropriate regularisation parameter.
no code implementations • 7 Mar 2023 • Davit Gogolashvili, Matteo Zecchin, Motonobu Kanagawa, Marios Kountouris, Maurizio Filippone
Classic results show that the IW correction is needed when the model is parametric and misspecified.
no code implementations • 21 Jan 2022 • Jonas Wacker, Motonobu Kanagawa, Maurizio Filippone
These variance formulas elucidate conditions under which certain approximations (e.g., TensorSRHT) achieve lower variances than others (e.g., Rademacher sketches), and conditions under which the use of complex features leads to lower variances than real features.
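As a minimal illustration of the kind of sketch compared here (not the paper's actual estimators or variance formulas), the following sketch builds unbiased random features for the polynomial kernel (x·y)^p from products of independent Rademacher projections; the function name `rademacher_sketch` is a hypothetical label for this example.

```python
import numpy as np

def rademacher_sketch(X, p, D, rng=None):
    """Unbiased random features for the polynomial kernel (x . y)**p:
    each of the D features is a product of p independent Rademacher
    projections, so E[phi(x) . phi(y)] = (x . y)**p."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # S[i, j] is a d-dimensional vector with i.i.d. +1/-1 entries.
    S = rng.choice([-1.0, 1.0], size=(D, p, d))
    # proj[n, i, j] = S[i, j] . x_n; feature i is the product over j.
    proj = np.einsum('Dpd,nd->nDp', S, X)
    return proj.prod(axis=-1) / np.sqrt(D)

X = np.array([[1.0, 0.5, -0.3], [0.2, -1.0, 0.4]])
Phi = rademacher_sketch(X, p=2, D=50000, rng=0)
# Phi @ Phi.T approximates (X @ X.T) ** 2 up to Monte Carlo error.
```

The per-entry approximation error shrinks at the usual Monte Carlo rate in the number of features D, and comparing such variances across sketch constructions is what the closed-form formulas above make precise.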
no code implementations • 25 Jun 2021 • An Chen, Motonobu Kanagawa, Fangyuan Zhang
We study a fully funded, collective defined-contribution (DC) pension system with multiple overlapping generations.
no code implementations • 2 Jun 2021 • Veit Wild, Motonobu Kanagawa, Dino Sejdinovic
We investigate the connections between sparse approximation methods for making kernel methods and Gaussian processes (GPs) scalable to large-scale data, focusing on the Nyström method and Sparse Variational Gaussian Processes (SVGP).
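For reference, the Nyström method approximates a full n×n Gram matrix K by K_nm K_mm^+ K_mn using m inducing (landmark) points. A minimal sketch, assuming an RBF kernel and landmarks chosen uniformly at random (the function names are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approximation(X, m, gamma=1.0, rng=None):
    """Low-rank Nystrom approximation K ~= K_nm K_mm^+ K_mn,
    with m landmark points sampled uniformly without replacement."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)
    Z = X[idx]
    K_nm = rbf_kernel(X, Z, gamma)
    K_mm = rbf_kernel(Z, Z, gamma)
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T
```

With m = n the approximation recovers K exactly (up to numerical error); the interesting regime is m << n, where the rank-m factorization cuts the cost of downstream kernel computations.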
no code implementations • NeurIPS 2019 • Motonobu Kanagawa, Philipp Hennig
Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive).
no code implementations • 7 Feb 2019 • Takafumi Kajihara, Motonobu Kanagawa, Yuuki Nakaguchi, Kanishka Khandelwal, Kenji Fukumizu
We propose a novel approach to model selection for simulator-based statistical models.
no code implementations • 21 Sep 2018 • Keiichi Kisamori, Motonobu Kanagawa, Keisuke Yamazaki
We propose a novel calibration method for computer simulators, dealing with the problem of covariate shift.
no code implementations • 6 Jul 2018 • Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other.
no code implementations • 22 May 2018 • Krikamol Muandet, Motonobu Kanagawa, Sorawit Saengkyongam, Sanparith Marukatat
In this work, we propose to model counterfactual distributions using a novel Hilbert space representation called counterfactual mean embedding (CME).
no code implementations • ICML 2018 • Takafumi Kajihara, Motonobu Kanagawa, Keisuke Yamazaki, Kenji Fukumizu
We propose a novel approach to parameter estimation for simulator-based statistical models with intractable likelihood.
1 code implementation • 1 Sep 2017 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu
This paper presents a convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces.
1 code implementation • 23 Jul 2017 • Damien Garreau, Wittawat Jitkrittum, Motonobu Kanagawa
In kernel methods, the median heuristic has been widely used as a way of setting the bandwidth of RBF kernels.
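The median heuristic itself is a one-liner: set the RBF bandwidth to the median of the pairwise Euclidean distances among the sample points. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median heuristic: the RBF bandwidth is the median of the
    pairwise Euclidean distances between the n sample points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Keep only the n*(n-1)/2 distinct pairs (i < j).
    iu = np.triu_indices(len(X), k=1)
    return np.median(np.sqrt(d2[iu]))
```

For example, for the one-dimensional points 0, 1, 3 the pairwise distances are 1, 2, 3, so the heuristic returns 2.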
no code implementations • NeurIPS 2016 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu
Kernel-based quadrature rules are becoming important in machine learning and statistics, as they achieve super-$\sqrt{n}$ convergence rates in numerical integration, and thus provide alternatives to Monte Carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional.
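A kernel quadrature rule computes weights w = K^{-1} z from the Gram matrix K at the nodes and the kernel mean z, then estimates the integral as a weighted sum of function values. As a minimal, self-contained sketch (not the rules analyzed in the paper), the following uses the Brownian-motion kernel k(x, y) = min(x, y), whose kernel mean under the uniform measure on [0, 1] is z(x) = x - x²/2 in closed form:

```python
import numpy as np

def kernel_quadrature_weights(nodes):
    """Kernel quadrature weights w = K^{-1} z for the Brownian-motion
    kernel k(x, y) = min(x, y) and the uniform measure on [0, 1].
    The kernel mean is z(x) = int_0^1 min(x, y) dy = x - x**2 / 2."""
    K = np.minimum(nodes[:, None], nodes[None, :])
    z = nodes - nodes ** 2 / 2.0
    return np.linalg.solve(K, z)

nodes = np.linspace(0.1, 0.9, 9)
w = kernel_quadrature_weights(nodes)
# Estimate int_0^1 f(x) dx by sum_i w_i f(x_i), e.g. for f(x) = x:
estimate = w @ nodes
```

The rule integrates the kernel interpolant of f exactly, which is where the fast convergence rates for smooth integrands come from; the misspecified setting of the paper is when f is rougher than the kernel assumes.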
no code implementations • 18 Sep 2014 • Yu Nishiyama, Motonobu Kanagawa, Arthur Gretton, Kenji Fukumizu
Our contribution in this paper is to introduce a novel approach, termed the model-based kernel sum rule (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference.
no code implementations • 17 Dec 2013 • Motonobu Kanagawa, Yu Nishiyama, Arthur Gretton, Kenji Fukumizu
In particular, the sampling and resampling procedures are novel in that they are expressed using kernel mean embeddings, which allows us to analyze their behavior theoretically.