Search Results for author: Bharath Sriperumbudur

Found 13 papers, 3 papers with code

Minimax Optimal Goodness-of-Fit Testing with Kernel Stein Discrepancy

no code implementations 12 Apr 2024 Omar Hagrass, Bharath Sriperumbudur, Krishnakumar Balasubramanian

We explore the minimax optimality of goodness-of-fit tests on general domains using the kernelized Stein discrepancy (KSD).

Computational Efficiency
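
For context, the sketch below shows how a KSD goodness-of-fit statistic is typically computed in practice. The Gaussian kernel, the bandwidth, the V-statistic form, and the availability of the target score in closed form are illustrative assumptions, not details taken from this paper.

```python
# Hedged sketch (a standard V-statistic, not the paper's test): squared KSD of a
# sample against a target density p with known score s_p(x) = grad log p(x).
import numpy as np

def ksd_vstat(X, score_p, sigma=1.0):
    """V-statistic estimate of KSD^2 for samples X (n x d) w.r.t. score_p."""
    n, d = X.shape
    S = score_p(X)                                 # n x d matrix of scores s_p(x_i)
    diff = X[:, None, :] - X[None, :, :]           # pairwise x_i - x_j, shape n x n x d
    sqdist = (diff**2).sum(-1)
    K = np.exp(-sqdist / (2 * sigma**2))           # Gaussian kernel matrix
    # Stein kernel u_p(x,y) = s(x)'s(y)k + s(x)'grad_y k + s(y)'grad_x k + tr(grad_x grad_y k)
    t1 = (S @ S.T) * K
    t2 = np.einsum('id,ijd->ij', S, diff) * K / sigma**2    # s(x_i)' grad_y k(x_i, x_j)
    t3 = -np.einsum('jd,ijd->ij', S, diff) * K / sigma**2   # s(x_j)' grad_x k(x_i, x_j)
    t4 = (d / sigma**2 - sqdist / sigma**4) * K              # trace term
    return (t1 + t2 + t3 + t4).mean()

# Example: samples drawn from the target N(0, I), whose score is s_p(x) = -x.
X = np.random.default_rng(0).normal(size=(200, 2))
print(ksd_vstat(X, score_p=lambda x: -x))  # small when the sample matches p (positive O(1/n) bias)
```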

Statistical Optimality and Computational Efficiency of Nyström Kernel PCA

no code implementations 19 May 2021 Nicholas Sterge, Bharath Sriperumbudur

Various approximation schemes have been proposed in the literature to alleviate these computational issues, and the approximate kernel machines are shown to retain the empirical performance of their exact counterparts.

Computational Efficiency

Robust Persistence Diagrams using Reproducing Kernels

1 code implementation NeurIPS 2020 Siddharth Vishwanath, Kenji Fukumizu, Satoshi Kuriki, Bharath Sriperumbudur

Persistent homology has become an important tool for extracting geometric and topological features from data, whose multi-scale features are summarized in a persistence diagram.

Gain with no Pain: Efficient Kernel-PCA by Nyström Sampling

no code implementations 11 Jul 2019 Nicholas Sterge, Bharath Sriperumbudur, Lorenzo Rosasco, Alessandro Rudi

In this paper, we propose and study a Nyström-based approach to efficient large-scale kernel principal component analysis (PCA).

Computational Efficiency
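
As a rough illustration of the Nyström route to approximate KPCA, the sketch below subsamples landmark points and does PCA on a low-rank factor of the kernel matrix. The uniform landmark selection, the Gaussian kernel, and the centring step are assumptions made for this sketch, not the authors' algorithm.

```python
# Hedged sketch: Nyström-approximate kernel PCA via the factor B = K_nm K_mm^{-1/2},
# so that B B^T approximates the full kernel matrix K.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def nystrom_kpca(X, m=50, n_components=2, sigma=1.0, seed=0):
    n = X.shape[0]
    idx = np.random.default_rng(seed).choice(n, size=m, replace=False)  # landmarks
    K_nm = gaussian_kernel(X, X[idx], sigma)
    K_mm = gaussian_kernel(X[idx], X[idx], sigma)
    w, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))       # jitter for numerical stability
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    B = K_nm @ K_mm_inv_sqrt                             # n x m factor with B B^T ~ K
    B -= B.mean(axis=0, keepdims=True)                   # approximate centring in feature space
    evals, evecs = np.linalg.eigh(B.T @ B / n)           # small m x m eigenproblem, not n x n
    order = np.argsort(evals)[::-1][:n_components]
    return B @ evecs[:, order]                           # projections of all n points

X = np.random.default_rng(1).normal(size=(500, 5))
print(nystrom_kpca(X, m=100, n_components=2).shape)      # (500, 2)
```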

Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off

no code implementations 20 Jun 2017 Bharath Sriperumbudur, Nicholas Sterge

We show that the approximate KPCA is both computationally and statistically efficient compared to KPCA in terms of the error associated with reconstructing a kernel function based on its projection onto the corresponding eigenspaces.
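
A companion sketch, under the common assumption that the random features are random Fourier features for a Gaussian kernel: approximate KPCA then reduces to linear PCA on the feature map. The feature count D and bandwidth are illustrative choices, and this is not the authors' code.

```python
# Hedged sketch: approximate kernel PCA by running linear PCA on a D-dimensional
# random-feature map whose inner products approximate the Gaussian kernel.
import numpy as np

def rff_kpca(X, D=200, n_components=2, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))    # frequencies for the Gaussian kernel
    b = rng.uniform(0, 2 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)           # n x D features, Z Z^T ~ K
    Z -= Z.mean(axis=0, keepdims=True)                 # centre the features
    evals, evecs = np.linalg.eigh(Z.T @ Z / n)         # D x D eigenproblem instead of n x n
    order = np.argsort(evals)[::-1][:n_components]
    return Z @ evecs[:, order]                         # approximate KPCA scores

X = np.random.default_rng(2).normal(size=(1000, 3))
print(rff_kpca(X).shape)  # (1000, 2)
```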

Kernel Mean Embedding of Distributions: A Review and Beyond

no code implementations 31 May 2016 Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard Schölkopf

Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications.

Causal Discovery, Two-sample testing
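
One concrete instance of the conditional embeddings discussed in this review is the standard regularized estimator sketched below; the Gaussian kernels, the regularization constant, and the toy data are assumptions made for illustration rather than anything prescribed by the review.

```python
# Hedged sketch: empirical conditional mean embedding.  Given paired samples (x_i, y_i),
# the embedding of P(Y | X = x) is approximated by a weighted sum of feature maps of
# the y_i, with weights beta(x) = (K_X + n*lam*I)^{-1} k_X(x).
import numpy as np

def gauss(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def conditional_mean_weights(X, x_query, lam=1e-3, sigma=1.0):
    """Weights beta so that mu_{Y|X=x} ~ sum_i beta_i k(y_i, .)."""
    n = X.shape[0]
    K_X = gauss(X, X, sigma)
    k_x = gauss(X, x_query, sigma)                     # n x q cross-kernel
    return np.linalg.solve(K_X + n * lam * np.eye(n), k_x)

# Toy data: Y = X^2 + noise; query the embedding at X = 0.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 1))
Y = X**2 + 0.1 * rng.normal(size=(300, 1))
beta = conditional_mean_weights(X, np.zeros((1, 1)))
print((beta.T @ Y).item())   # approximates E[Y | X = 0] = 0 (up to smoothing bias)
```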

Learning Theory for Distribution Regression

1 code implementation 8 Nov 2014 Zoltan Szabo, Bharath Sriperumbudur, Barnabas Poczos, Arthur Gretton

In this paper, we study a simple, analytically computable, ridge regression-based alternative to distribution regression, where we embed the distributions to a reproducing kernel Hilbert space, and learn the regressor from the embeddings to the outputs.

Density Estimation, Learning Theory, +2
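
A bare-bones sketch of the two-stage pipeline described above: embed each bag of samples as its empirical mean embedding, then learn a ridge regressor between bags using the inner product of embeddings as the outer kernel. The kernel choices, regularization constant, and toy task are assumptions for illustration, not the paper's experimental setup.

```python
# Hedged sketch: distribution regression via mean embeddings + kernel ridge regression.
import numpy as np

def gauss(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def bag_kernel(bags, sigma=1.0):
    """K[i, j] = <mu_i, mu_j> = mean of k(x, x') over x in bag i, x' in bag j."""
    m = len(bags)
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = gauss(bags[i], bags[j], sigma).mean()
    return K

def fit_predict(train_bags, y, test_bags, lam=1e-3, sigma=1.0):
    K = bag_kernel(train_bags + test_bags, sigma)
    n = len(train_bags)
    alpha = np.linalg.solve(K[:n, :n] + n * lam * np.eye(n), y)   # ridge on embeddings
    return K[n:, :n] @ alpha

# Toy task: predict the mean of each Gaussian bag from its samples.
rng = np.random.default_rng(0)
means = rng.uniform(-1, 1, size=30)
bags = [rng.normal(mu, 1.0, size=(50, 1)) for mu in means]
y_hat = fit_predict(bags[:25], means[:25], bags[25:])
print(np.round(np.c_[y_hat, means[25:]], 2))   # predicted vs. true means for held-out bags
```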

Kernel Mean Estimation via Spectral Filtering

no code implementations NeurIPS 2014 Krikamol Muandet, Bharath Sriperumbudur, Bernhard Schölkopf

The problem of estimating the kernel mean in a reproducing kernel Hilbert space (RKHS) is central to kernel methods in that it is used by classical approaches (e.g., when centering a kernel PCA matrix), and it also forms the core inference step of modern kernel methods (e.g., kernel-based non-parametric tests) that rely on embedding probability distributions in RKHSs.

Kernel Mean Shrinkage Estimators

no code implementations 21 May 2014 Krikamol Muandet, Bharath Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf

A mean function in a reproducing kernel Hilbert space (RKHS), or a kernel mean, is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis, and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in RKHSs.
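
A small simulation in the spirit of this line of work, sketched under assumptions (standard normal data, Gaussian kernel, shrinkage towards zero): the population embedding has a closed form here, so the RKHS risk is exactly computable and the Stein-type improvement of a slightly shrunk estimator over the empirical kernel mean can be seen numerically. This is an illustration, not the paper's estimators.

```python
# Hedged sketch: shrinking the empirical kernel mean towards zero can reduce the
# RKHS risk E||mu_hat - mu_P||^2, shown for X ~ N(0, 1) and a Gaussian kernel.
import numpy as np

sigma = 1.0                                    # kernel bandwidth (assumed)

def k(A, B):
    return np.exp(-(A[:, None] - B[None, :])**2 / (2 * sigma**2))

def mu_true(x):                                # mu_P(x) = E[k(X, x)] for X ~ N(0, 1)
    return sigma / np.sqrt(sigma**2 + 1) * np.exp(-x**2 / (2 * (sigma**2 + 1)))

mu_norm_sq = sigma / np.sqrt(sigma**2 + 2)     # ||mu_P||^2 = E[k(X, X')]

def rkhs_risk(alpha, n=20, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        x = rng.normal(size=n)
        w = np.full(n, (1 - alpha) / n)        # shrinkage towards 0: scale the weights
        total += w @ k(x, x) @ w - 2 * w @ mu_true(x) + mu_norm_sq
    return total / trials

print(rkhs_risk(0.0), rkhs_risk(0.05))         # a small shrinkage lowers the risk slightly
```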

Two-stage Sampled Learning Theory on Distributions

no code implementations 7 Feb 2014 Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur

To the best of our knowledge, the only existing method with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which suffers from slow convergence issues in high dimensions), and the domain of the distributions to be compact Euclidean.

Density Estimation, Learning Theory, +3

Density Estimation in Infinite Dimensional Exponential Families

1 code implementation 12 Dec 2013 Bharath Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Aapo Hyvärinen, Revant Kumar

When $p_0\in\mathcal{P}$, we show that the proposed estimator is consistent, and provide a convergence rate of $n^{-\min\left\{\frac{2}{3},\frac{2\beta+1}{2\beta+2}\right\}}$ in Fisher divergence under the smoothness assumption that $\log p_0\in\mathcal{R}(C^\beta)$ for some $\beta\ge 0$, where $C$ is a certain Hilbert-Schmidt operator on $H$ and $\mathcal{R}(C^\beta)$ denotes the image of $C^\beta$.

Density Estimation

Kernel Mean Estimation and Stein's Effect

no code implementations 4 Jun 2013 Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Schölkopf

A mean function in a reproducing kernel Hilbert space, or a kernel mean, is an important part of many applications ranging from kernel principal component analysis to Hilbert-space embedding of distributions.

Equivalence of distance-based and RKHS-based statistics in hypothesis testing

no code implementations 25 Jul 2012 Dino Sejdinovic, Bharath Sriperumbudur, Arthur Gretton, Kenji Fukumizu

We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, maximum mean discrepancies (MMD), that is, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning.

Two-sample testing
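
The equivalence can be checked numerically: with the distance-induced kernel k(x, y) = ||x|| + ||y|| - ||x - y||, the squared MMD coincides with the energy distance. The sketch below (V-statistic estimates and Gaussian toy data are assumptions for illustration) computes both quantities on the same samples and obtains the same number.

```python
# Hedged sketch: energy distance equals squared MMD under the distance-induced kernel.
import numpy as np

def pdist(A, B):
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def energy_distance(X, Y):
    return 2 * pdist(X, Y).mean() - pdist(X, X).mean() - pdist(Y, Y).mean()

def mmd_sq_distance_kernel(X, Y):
    def k(A, B):                                        # k(x, y) = |x| + |y| - |x - y|
        return (np.linalg.norm(A, axis=1)[:, None]
                + np.linalg.norm(B, axis=1)[None, :] - pdist(A, B))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(100, 2))
print(energy_distance(X, Y), mmd_sq_distance_kernel(X, Y))   # numerically identical
```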
