no code implementations • 10 Nov 2022 • Heishiro Kanagawa, Alessandro Barp, Arthur Gretton, Lester Mackey
Kernel Stein discrepancies (KSDs) measure the quality of a distributional approximation and can be computed even when the target density has an intractable normalizing constant.
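As a minimal illustration of why no normalizing constant is needed: the KSD depends on the target p only through its score ∇ log p(x), which is unchanged by any constant scaling of the density. The sketch below is a V-statistic estimate with a Gaussian RBF kernel; the function name and the bandwidth choice are illustrative, not from the paper.

```python
import numpy as np

def ksd_vstat(X, score, sigma=1.0):
    """V-statistic estimate of the squared KSD with an RBF kernel.

    X: (n, d) samples from the approximating distribution q.
    score: function mapping (n, d) -> (n, d), returning grad log p(x)
           row-wise; only the *unnormalized* density of p is needed.
    """
    n, d = X.shape
    S = score(X)                                  # score at each sample
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise x_i - x_j
    sq = np.sum(diff**2, axis=-1)                 # (n, n) squared distances
    K = np.exp(-sq / (2 * sigma**2))              # RBF kernel matrix

    # Four terms of the Stein kernel u_p(x, y):
    t1 = (S @ S.T) * K                                     # s(x)'s(y) k(x,y)
    t2 = np.einsum('id,ijd->ij', S, diff) / sigma**2 * K   # s(x)' grad_y k
    t3 = np.einsum('jd,ijd->ij', S, -diff) / sigma**2 * K  # s(y)' grad_x k
    t4 = (d / sigma**2 - sq / sigma**4) * K                # tr(grad_x grad_y k)
    return (t1 + t2 + t3 + t4).mean()
```

For a standard normal target the score is simply `lambda X: -X`; samples drawn from that target yield a small KSD, while shifted samples yield a larger one.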
1 code implementation • 19 Oct 2022 • Jerome Baum, Heishiro Kanagawa, Arthur Gretton
We propose a goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences.
1 code implementation • NeurIPS 2021 • Liyuan Xu, Heishiro Kanagawa, Arthur Gretton
Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder.
no code implementations • 23 Aug 2020 • Li K. Wenliang, Heishiro Kanagawa
Statistical tasks such as density estimation and approximate Bayesian inference often involve densities with unknown normalising constants.
1 code implementation • 24 Feb 2020 • Wittawat Jitkrittum, Heishiro Kanagawa, Bernhard Schölkopf
We propose two nonparametric statistical tests of goodness of fit for conditional distributions: given a conditional probability density function $p(y|x)$ and a joint sample, decide whether the sample is drawn from $p(y|x)r_x(x)$ for some density $r_x$.
no code implementations • ICML 2020 • Li K. Wenliang, Theodore Moskovitz, Heishiro Kanagawa, Maneesh Sahani
Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable.
1 code implementation • 1 Jul 2019 • Heishiro Kanagawa, Wittawat Jitkrittum, Lester Mackey, Kenji Fukumizu, Arthur Gretton
We propose a kernel-based nonparametric test of relative goodness of fit, where the goal is to compare two models, both of which may have unobserved latent variables, such that the marginal distribution of the observed variables is intractable.
3 code implementations • NeurIPS 2018 • Wittawat Jitkrittum, Heishiro Kanagawa, Patsorn Sangkloy, James Hays, Bernhard Schölkopf, Arthur Gretton
Given two candidate models and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models.
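The paper develops dedicated relative tests with interpretable features; as a plain stand-in for the underlying idea, one can compare how far each model's samples sit from the data under a common discrepancy. The sketch below uses a biased squared-MMD estimate with an RBF kernel, which is a swapped-in illustration rather than the paper's test statistic; all names are hypothetical.

```python
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD with an RBF kernel."""
    def k(A, B):
        sq = np.sum((A[:, None, :] - B[None, :, :])**2, axis=-1)
        return np.exp(-sq / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def relative_fit(samples_p, samples_q, data, sigma=1.0):
    """Negative when model P's samples are closer to the data than model Q's."""
    return mmd2_biased(samples_p, data, sigma) - mmd2_biased(samples_q, data, sigma)
```

A proper test would additionally estimate the variance of this difference to calibrate a p-value; the sign alone only indicates which model fits better on the given samples.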
no code implementations • 8 Mar 2018 • Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, Taiji Suzuki
The behavior of users in one service can reveal their preferences and thereby inform recommendations in other services they have never used.
no code implementations • NeurIPS 2016 • Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami
We investigate the statistical performance and computational efficiency of the alternating minimization procedure for nonparametric tensor learning.
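The paper analyzes this procedure in a nonparametric (kernel-based) setting; the finite-dimensional analogue below conveys the mechanics. For a rank-one approximation of a 3-way tensor, fixing two factors makes the objective quadratic in the third, so each alternating step has a closed-form least-squares solution. The function name and initialization are illustrative assumptions.

```python
import numpy as np

def rank1_als(T, n_iter=50, seed=0):
    """Rank-one approximation of a 3-way tensor by alternating minimization.

    Minimizes ||T - a (x) b (x) c||^2: each step fixes two factors and
    solves the resulting least-squares problem for the third in closed form.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    b = rng.standard_normal(J)
    c = rng.standard_normal(K)
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c
```

On an exactly rank-one tensor this recovers the factors (up to rescaling) from a generic random initialization; the paper's contribution concerns how fast and how accurately such alternating updates behave in the harder nonparametric regime.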