1 code implementation • 3 Jun 2022 • Siddharth Vishwanath, Bharath K. Sriperumbudur, Kenji Fukumizu, Satoshi Kuriki
The distance function to a compact set plays a crucial role in the paradigm of topological data analysis.
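As a minimal illustration of the object in question (not code from the paper): the distance function to a finite point cloud, the basic ingredient behind sublevel-set filtrations in topological data analysis, can be computed by a nearest-neighbor search. The unit-circle sample below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2 * np.pi, size=200)
S = np.c_[np.cos(theta), np.sin(theta)]      # sample from the unit circle

def dist_to_set(x, S):
    """Euclidean distance d_S(x) = min_{s in S} ||x - s|| to the finite set S."""
    return np.min(np.linalg.norm(S - x, axis=1))

print(dist_to_set(np.array([0.0, 0.0]), S))  # ~1.0: centre of the circle
print(dist_to_set(np.array([2.0, 0.0]), S))  # ~1.0: one unit outside the circle
```

Sublevel sets of this function, as the threshold grows, recover the topology of the underlying circle.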
no code implementations • 28 Aug 2017 • Zoltan Szabo, Bharath K. Sriperumbudur
Maximum mean discrepancy (MMD), also called energy distance or N-distance in statistics, and the Hilbert-Schmidt independence criterion (HSIC), known as distance covariance in statistics, are among the most popular and successful approaches to quantifying the difference between, and the independence of, random variables, respectively.
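A minimal NumPy sketch of the MMD estimator referenced above, assuming a Gaussian kernel and the simple biased (V-statistic) form; the bandwidth and sample sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y."""
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))   # samples from P
Y = rng.normal(0.0, 1.0, size=(500, 1))   # samples from Q = P
Z = rng.normal(2.0, 1.0, size=(500, 1))   # samples from a shifted Q

print(mmd2_biased(X, Y))  # near zero: same distribution
print(mmd2_biased(X, Z))  # clearly positive: different distributions
```

With a characteristic kernel (such as the Gaussian), the population MMD is zero if and only if the two distributions coincide, which is what makes the statistic usable for two-sample testing.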
no code implementations • 30 Mar 2018 • Shashank Singh, Bharath K. Sriperumbudur, Barnabás Póczos
We study estimation of (semi-)inner products between two nonparametric probability distributions, given IID samples from each distribution.
1 code implementation • 1 Sep 2017 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu
This paper presents a convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces.
no code implementations • 17 Aug 2017 • Ingo Steinwart, Bharath K. Sriperumbudur, Philipp Thomann
We derive and analyze a generic, recursive algorithm for estimating all splits in a finite cluster tree as well as the corresponding clusters.
no code implementations • NeurIPS 2016 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu
Kernel-based quadrature rules are becoming important in machine learning and statistics, as they achieve super-$\sqrt{n}$ convergence rates in numerical integration, and thus provide alternatives to Monte Carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional.
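A hedged sketch of the generic kernel quadrature construction (not the paper's specific setting): for nodes $x_1,\dots,x_n$, the weights solve $Kw = z$, where $K$ is the kernel Gram matrix and $z_i = \int k(x, x_i)\,dP(x)$ is the kernel mean embedding at each node. For the Gaussian kernel and the uniform measure on $[0,1]$, $z$ has a closed form via the error function:

```python
import numpy as np
from math import erf, sqrt, pi

sigma = 0.2
nodes = np.linspace(0.0, 1.0, 20)

# Gram matrix K_ij = k(x_i, x_j) for the Gaussian kernel
K = np.exp(-(nodes[:, None] - nodes[None, :])**2 / (2 * sigma**2))

# Mean embedding z_i = \int_0^1 k(x, x_i) dx (closed form via erf)
c = sigma * sqrt(pi / 2)
z = np.array([c * (erf((1 - xi) / (sigma * sqrt(2))) - erf(-xi / (sigma * sqrt(2))))
              for xi in nodes])

# Quadrature weights solve K w = z (small jitter for numerical stability)
w = np.linalg.solve(K + 1e-8 * np.eye(len(nodes)), z)

estimate = w @ np.sin(nodes)   # kernel quadrature estimate of \int_0^1 sin(x) dx
exact = 1 - np.cos(1.0)
print(estimate, exact)
```

For smooth integrands the weighted sum converges much faster than the $1/\sqrt{n}$ Monte Carlo rate, which is the advantage the abstract alludes to.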
no code implementations • NeurIPS 2015 • Bharath K. Sriperumbudur, Zoltan Szabo
Kernel methods are among the most powerful tools in machine learning for tackling problems expressed in terms of function values and derivatives, owing to their capacity to represent and model complex relations.
no code implementations • 11 May 2013 • Purushottam Kar, Bharath K. Sriperumbudur, Prateek Jain, Harish C Karnick
We are also able to analyze a class of memory-efficient online learning algorithms for pairwise learning problems that use only a bounded subset of past training samples to update the hypothesis at each step.
no code implementations • 30 Jul 2009 • Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, Gert R. G. Lanckriet
First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted {\em characteristic kernels}.
no code implementations • 6 Jul 2018 • Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur
This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other.
no code implementations • 18 Jan 2009 • Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Gert R. G. Lanckriet
First, to understand the relation between IPMs and $\phi$-divergences, the necessary and sufficient conditions under which these classes intersect are derived: the total variation distance is shown to be the only non-trivial $\phi$-divergence that is also an IPM.
Information Theory
no code implementations • 11 Oct 2018 • Zoltan Szabo, Bharath K. Sriperumbudur
Random Fourier features (RFF) represent one of the most popular and widespread techniques in machine learning for scaling up kernel algorithms.
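A minimal sketch of the RFF construction for the Gaussian kernel (the bandwidth and feature count below are illustrative assumptions): frequencies are drawn from the kernel's spectral density, and the inner product of the resulting features approximates the kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, sigma = 5, 2000, 1.0   # input dim, number of random features, bandwidth

# For k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the spectral density is
# N(0, I / sigma^2); sample frequencies W and phases b accordingly.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def rff(X):
    """Map rows of X to D random features so that phi(x)^T phi(y) ~ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x = rng.normal(size=(1, d))
y = rng.normal(size=(1, d))
exact = np.exp(-np.sum((x - y)**2) / (2 * sigma**2))
approx = float(rff(x) @ rff(y).T)
print(exact, approx)   # the two agree up to O(1/sqrt(D)) Monte Carlo error
```

Replacing the $n \times n$ Gram matrix with an $n \times D$ feature matrix is what turns quadratic-cost kernel algorithms into linear-cost ones.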
no code implementations • NeurIPS 2016 • Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Bernhard Schölkopf
Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing.
no code implementations • NeurIPS 2012 • Arthur Gretton, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji Fukumizu, Bharath K. Sriperumbudur
A means of parameter selection for the two-sample test based on the MMD is proposed.
no code implementations • NeurIPS 2011 • Kenji Fukumizu, Gert R. Lanckriet, Bharath K. Sriperumbudur
The goal of this paper is to investigate the advantages and disadvantages of learning in Banach spaces over Hilbert spaces.
no code implementations • NeurIPS 2009 • Arthur Gretton, Kenji Fukumizu, Zaïd Harchaoui, Bharath K. Sriperumbudur
A kernel embedding of probability distributions into reproducing kernel Hilbert spaces (RKHS) has recently been proposed, which allows the comparison of two probability measures P and Q based on the distance between their respective embeddings: for a sufficiently rich RKHS, this distance is zero if and only if P and Q coincide.
no code implementations • NeurIPS 2009 • Gert R. Lanckriet, Bharath K. Sriperumbudur
In this paper, we follow a different line of reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework for proving the convergence of CCCP, yielding a simpler and more elegant proof.
no code implementations • NeurIPS 2008 • Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Bharath K. Sriperumbudur
Embeddings of random variables in reproducing kernel Hilbert spaces (RKHSs) may be used to conduct statistical inference based on higher order moments.
no code implementations • 1 Feb 2019 • Joseph Lam-Weil, Alexandra Carpentier, Bharath K. Sriperumbudur
We consider the closeness testing problem for discrete distributions.
no code implementations • 16 Aug 2019 • Samory Kpotufe, Bharath K. Sriperumbudur
The main contribution of the paper is to show that Gaussian sketching of a kernel Gram matrix $\boldsymbol K$ yields an operator whose counterpart in an RKHS $\mathcal H$ is a \emph{random projection} operator, in the spirit of the Johnson-Lindenstrauss (J-L) lemma.
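The classical J-L ingredient behind this result can be illustrated in a few lines (a finite-dimensional toy case, not the RKHS construction of the paper): a Gaussian random projection approximately preserves pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 1000, 100    # ambient dimension, projection dimension m << d

# Gaussian random projection, scaled so norms are preserved in expectation
P = rng.normal(size=(m, d)) / np.sqrt(m)

x = rng.normal(size=d)
y = rng.normal(size=d)

orig = np.linalg.norm(x - y)
proj = np.linalg.norm(P @ (x - y))
print(orig, proj)   # close, up to relative error on the order of 1/sqrt(m)
```

The paper's point is that sketching the Gram matrix with a Gaussian matrix implements exactly this kind of projection, but on the feature maps living in $\mathcal H$.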
no code implementations • 2 Dec 2019 • Tianhong Sheng, Bharath K. Sriperumbudur
For certain distance-kernel pairs, we show the distance-based conditional independence measures to be equivalent to the kernel-based measures.
no code implementations • 22 Nov 2021 • Zhengxin Zhang, Youssef Mroueh, Ziv Goldfeld, Bharath K. Sriperumbudur
Discrepancy measures between probability distributions are at the core of statistical inference and machine learning.
no code implementations • 13 Jul 2022 • Saiteja Utpala, Bharath K. Sriperumbudur
We propose estimators that shrink the $U$-statistic estimator of the Bochner integral towards a pre-specified target element in the Hilbert space.
no code implementations • 15 Nov 2022 • Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu
In this work, we propose the Regularized Stein Variational Gradient Flow, which interpolates between the Stein Variational Gradient Flow and the Wasserstein Gradient Flow.
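For context, a minimal sketch of the plain (unregularized) Stein variational gradient update that the paper's flow generalizes, assuming a one-dimensional standard-normal target and a Gaussian kernel with fixed bandwidth; all numerical choices here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def svgd_step(x, step=0.1, h=1.0):
    """One SVGD update for a N(0, 1) target (score = -x), Gaussian kernel."""
    diff = x[:, None] - x[None, :]          # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h**2))       # kernel matrix k(x_j, x_i)
    score = -x                              # grad log density of N(0, 1)
    # Attractive (kernel-weighted score) plus repulsive (kernel gradient) terms
    phi = (K * score[None, :] + diff / h**2 * K).mean(axis=1)
    return x + step * phi

x = rng.normal(5.0, 0.5, size=50)           # particles start far from the target
for _ in range(500):
    x = svgd_step(x)
print(x.mean(), x.std())                    # roughly 0 and spread near 1
```

The attractive term transports particles toward high-density regions, while the repulsive term keeps them spread out; the regularized flow of the paper modifies this velocity field.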
no code implementations • 19 Dec 2022 • Omar Hagrass, Bharath K. Sriperumbudur, Bing Li
First, we show that the popular MMD (maximum mean discrepancy) two-sample test is not optimal in terms of the separation boundary measured in Hellinger distance.
no code implementations • 29 Jun 2023 • Sakshi Arya, Bharath K. Sriperumbudur
We also show that for any choice of kernel and the corresponding RKHS, we achieve a sub-linear regret rate depending on the intrinsic dimensionality of the RKHS.
no code implementations • 8 Aug 2023 • Omar Hagrass, Bharath K. Sriperumbudur, Bing Li
Maximum mean discrepancy (MMD) has enjoyed considerable success in many machine learning and statistical applications, including nonparametric hypothesis testing, because of its ability to handle non-Euclidean data.