Search Results for author: Bharath K. Sriperumbudur

Found 22 papers, 2 papers with code

Robust Topological Inference in the Presence of Outliers

1 code implementation • 3 Jun 2022 • Siddharth Vishwanath, Bharath K. Sriperumbudur, Kenji Fukumizu, Satoshi Kuriki

The distance function to a compact set plays a crucial role in the paradigm of topological data analysis.

Topological Data Analysis
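
The object in question is $d_K(x) = \min_{y \in K} \|x - y\|$, whose sublevel sets drive persistence computations but are highly sensitive to outliers, which motivates the robust variants studied in the paper. A minimal numpy sketch of the plain (non-robust) distance function, for illustration only:

```python
import numpy as np

def distance_to_set(x, K):
    """Distance from a query point x to a finite point cloud K
    (standing in for a compact set): d_K(x) = min_{y in K} ||x - y||."""
    return np.min(np.linalg.norm(K - x, axis=1))

# Example: distance from the origin to points sampled on the unit circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
K = np.column_stack([np.cos(theta), np.sin(theta)])
print(distance_to_set(np.zeros(2), K))  # close to 1.0
```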

Cycle Consistent Probability Divergences Across Different Spaces

no code implementations • 22 Nov 2021 • Zhengxin Zhang, Youssef Mroueh, Ziv Goldfeld, Bharath K. Sriperumbudur

Discrepancy measures between probability distributions are at the core of statistical inference and machine learning.

On Distance and Kernel Measures of Conditional Independence

no code implementations • 2 Dec 2019 • Tianhong Sheng, Bharath K. Sriperumbudur

For certain distance and kernel pairs, we show that distance-based conditional independence measures are equivalent to their kernel-based counterparts.

Causal Discovery • Dimensionality Reduction • +1

Gaussian Sketching yields a J-L Lemma in RKHS

no code implementations • 16 Aug 2019 • Samory Kpotufe, Bharath K. Sriperumbudur

The main contribution of the paper is to show that Gaussian sketching of a kernel Gram matrix $\boldsymbol K$ yields an operator whose counterpart in an RKHS $\mathcal H$ is a \emph{random projection} operator, in the spirit of the Johnson-Lindenstrauss (J-L) lemma.
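
A matrix-level sketch of the construction (Gaussian RBF kernel assumed; the paper's contribution is the RKHS-level interpretation, which this snippet does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 5, 50                 # n samples, input dim d, sketch size m << n

X = rng.standard_normal((n, d))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))   # Gaussian Gram matrix

S = rng.standard_normal((m, n)) / np.sqrt(m)             # Gaussian sketch
K_sketched = S @ K                                       # m x n sketched Gram

# Columns of K are empirical feature vectors; the sketch approximately
# preserves their pairwise distances, J-L style:
i, j = 0, 1
print(np.linalg.norm(K[:, i] - K[:, j]),
      np.linalg.norm(K_sketched[:, i] - K_sketched[:, j]))
```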

On Kernel Derivative Approximation with Random Fourier Features

no code implementations • 11 Oct 2018 • Zoltan Szabo, Bharath K. Sriperumbudur

Random Fourier features (RFF) represent one of the most popular and widespread techniques in machine learning for scaling up kernel algorithms.
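
For context, the standard RFF construction for the Gaussian kernel (the paper's subject is approximating kernel derivatives with such features; this sketch shows only the basic kernel approximation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 2000                        # input dimension, number of features

W = rng.standard_normal((D, d))       # frequencies ~ spectral density of the kernel
b = rng.uniform(0, 2 * np.pi, D)      # random phases

def z(x):
    """Feature map with z(x) . z(y) ~= exp(-||x - y||^2 / 2)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(z(x) @ z(y), np.exp(-np.sum((x - y)**2) / 2))   # should nearly agree
```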

Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences

no code implementations • 6 Jul 2018 • Motonobu Kanagawa, Philipp Hennig, Dino Sejdinovic, Bharath K. Sriperumbudur

This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other.

Gaussian Processes
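
One equivalence the review covers is directly checkable: the GP posterior mean with noise variance sigma^2 coincides with the kernel ridge regression prediction with regularizer lambda = sigma^2. A minimal sketch under that assumption (Gaussian kernel, 1-D inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def gram(A, B):
    """Gaussian kernel / GP covariance: k(x, x') = exp(-(x - x')^2 / 2)."""
    return np.exp(-(A[:, None, 0] - B[None, :, 0])**2 / 2)

sigma2 = 0.01                          # GP noise variance == KRR regularizer
alpha = np.linalg.solve(gram(X, X) + sigma2 * np.eye(n), y)

x_star = np.array([[0.5]])
# The same expression is simultaneously the GP posterior mean at x_star and
# the KRR prediction given by the representer theorem with lambda = sigma2.
print(gram(x_star, X) @ alpha)
```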

Minimax Estimation of Quadratic Fourier Functionals

no code implementations • 30 Mar 2018 • Shashank Singh, Bharath K. Sriperumbudur, Barnabás Póczos

We study estimation of (semi-)inner products between two nonparametric probability distributions, given IID samples from each distribution.

Translation
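
The simplest instance is the $L^2$ inner product $\langle p, q\rangle = \mathbb{E}_{Y \sim q}[p(Y)]$, which suggests a naive plug-in estimator: average a density estimate of $p$ over the sample from $q$. This is only a baseline, not the minimax-optimal Fourier-weighted estimator the paper constructs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(2000)          # X_i ~ p = N(0, 1)
Y = rng.standard_normal(2000) + 0.5    # Y_j ~ q = N(0.5, 1)

def kde(points, data, h=0.3):
    """Gaussian kernel density estimate fitted to `data`, evaluated at `points`."""
    z = (points[:, None] - data[None, :]) / h
    return np.exp(-z**2 / 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

print(kde(Y, X).mean())                            # plug-in estimate of <p, q>
print(np.exp(-0.5**2 / 4) / np.sqrt(4 * np.pi))    # exact value for this pair
```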

Convergence Analysis of Deterministic Kernel-Based Quadrature Rules in Misspecified Settings

1 code implementation • 1 Sep 2017 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu

This paper presents a convergence analysis of kernel-based quadrature rules in misspecified settings, focusing on deterministic quadrature in Sobolev spaces.
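
A worked instance of the rule being analyzed, in a well-specified case where everything is closed form (Gaussian kernel, target measure N(0,1)): the weights solve the linear system matching the rule to the kernel mean embedding at the nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell = 20, 1.0                       # number of nodes, kernel lengthscale
X = rng.standard_normal(n)             # quadrature nodes; target P = N(0, 1)

K = np.exp(-(X[:, None] - X[None, :])**2 / (2 * ell**2))
# Kernel mean embedding of N(0,1) at the nodes (closed form for this pair):
z = ell / np.sqrt(ell**2 + 1) * np.exp(-X**2 / (2 * (ell**2 + 1)))
w = np.linalg.solve(K + 1e-10 * np.eye(n), z)      # quadrature weights

f = lambda x: x**2                     # test integrand with E[f(X)] = 1
print(w @ f(X))                        # kernel quadrature estimate of 1.0
```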

Characteristic and Universal Tensor Product Kernels

no code implementations • 28 Aug 2017 • Zoltan Szabo, Bharath K. Sriperumbudur

Maximum mean discrepancy (MMD), known in statistics as energy distance or N-distance, and the Hilbert-Schmidt independence criterion (HSIC), known in statistics as distance covariance, are among the most popular and successful approaches to quantifying, respectively, the difference between and the independence of random variables.
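
A minimal biased HSIC estimator, to make the tensor-product structure concrete (Gaussian kernels on both variables, bandwidths untuned; a sketch, not the paper's characterization):

```python
import numpy as np

def gaussian_gram(x, bw=1.0):
    return np.exp(-(x[:, None] - x[None, :])**2 / (2 * bw**2))

def hsic(x, y):
    """Biased HSIC estimate: tr(K H L H) / n^2 with centering H = I - 11^T/n."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gaussian_gram(x) @ H @ gaussian_gram(y) @ H) / n**2

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
print(hsic(x, rng.standard_normal(500)))   # independent: near 0
print(hsic(x, x**2))                       # dependent: clearly larger
```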

Adaptive Clustering Using Kernel Density Estimators

no code implementations • 17 Aug 2017 • Ingo Steinwart, Bharath K. Sriperumbudur, Philipp Thomann

We derive and analyze a generic, recursive algorithm for estimating all splits in a finite cluster tree as well as the corresponding clusters.
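
The object being estimated can be pictured with the textbook level-set construction (a sketch of the idea only, not the paper's recursive split-estimation algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])

grid, h = np.linspace(-4, 4, 400), 0.25
kde = (np.exp(-(grid[:, None] - data[None, :])**2 / (2 * h**2)).mean(axis=1)
       / (h * np.sqrt(2 * np.pi)))

level = 0.1
above = kde > level
# Connected components of {density > level} are the clusters at this level;
# splits of the cluster tree appear as the level rises past saddle heights.
n_clusters = np.sum(np.diff(above.astype(int)) == 1)
print(n_clusters, "clusters at level", level)      # 2 for this bimodal sample
```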

Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels

no code implementations • NeurIPS 2016 • Ilya O. Tolstikhin, Bharath K. Sriperumbudur, Bernhard Schölkopf

Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing.
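
For reference, the standard unbiased estimator of squared MMD with a radial (Gaussian) kernel, as a minimal sketch:

```python
import numpy as np

def mmd2_unbiased(X, Y, bw=1.0):
    """Unbiased estimate of MMD^2 between samples X and Y (Gaussian kernel)."""
    k = lambda A, B: np.exp(-(A[:, None] - B[None, :])**2 / (2 * bw**2))
    m, n = len(X), len(Y)
    Kxx, Kyy = k(X, X), k(Y, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * k(X, Y).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal(500)
print(mmd2_unbiased(X, rng.standard_normal(500)))      # same law: near 0
print(mmd2_unbiased(X, rng.standard_normal(500) + 1))  # shifted: clearly > 0
```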

Convergence guarantees for kernel-based quadrature rules in misspecified settings

no code implementations • NeurIPS 2016 • Motonobu Kanagawa, Bharath K. Sriperumbudur, Kenji Fukumizu

Kernel-based quadrature rules are becoming important in machine learning and statistics, as they achieve super-$\sqrt{n}$ convergence rates in numerical integration, and thus provide alternatives to Monte Carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional.

Numerical Integration
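
Concretely, for integrands in a Sobolev space of smoothness $b$ on a $d$-dimensional domain, kernel quadrature attains the optimal deterministic worst-case rate, against the dimension-free but slower Monte Carlo rate (the paper's question is what survives when the assumed smoothness is misspecified):

```latex
\sup_{\|f\|_{W_2^b} \le 1} \Big| \int f \, dP - \sum_{i=1}^n w_i f(X_i) \Big|
  = O(n^{-b/d})
\qquad \text{vs.} \qquad
O(n^{-1/2}) \ \text{for Monte Carlo.}
```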

Optimal Rates for Random Fourier Features

no code implementations • NeurIPS 2015 • Bharath K. Sriperumbudur, Zoltan Szabo

Kernel methods are among the most powerful tools in machine learning for tackling problems expressed in terms of function values and derivatives, owing to their capability to represent and model complex relations.

On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions

no code implementations • 11 May 2013 • Purushottam Kar, Bharath K. Sriperumbudur, Prateek Jain, Harish C Karnick

We are also able to analyze a class of memory efficient online learning algorithms for pairwise learning problems that use only a bounded subset of past training samples to update the hypothesis at each step.

Generalization Bounds • Metric Learning • +1
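
A schematic of the buffered update pattern such analyses cover: each new example is paired only with a bounded reservoir of past examples (an illustrative AUC-style pairwise learner, not the paper's exact algorithm or loss):

```python
import numpy as np

rng = np.random.default_rng(0)
d, B, eta = 5, 25, 0.05               # dimension, buffer size, step size
w, buffer = np.zeros(d), []           # hypothesis, bounded sample buffer

def grad(w, pos, neg):
    """Gradient of the logistic pairwise loss log(1 + exp(-w.(x+ - x-)))."""
    diff = pos - neg
    return -diff / (1 + np.exp(w @ diff))

for t in range(2000):
    y = rng.integers(0, 2)
    x = rng.standard_normal(d) + (2 * y - 1) * 0.5   # class-dependent mean
    pairs = [(x, xb) if y == 1 else (xb, x) for xb, yb in buffer if yb != y]
    if pairs:                          # pair the new point with the buffer only
        w -= eta * np.mean([grad(w, p, q) for p, q in pairs], axis=0)
    buffer.append((x, y))
    if len(buffer) > B:
        buffer.pop(rng.integers(len(buffer)))        # reservoir-style eviction
```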

Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint

no code implementations • NeurIPS 2011 • Kenji Fukumizu, Gert R. Lanckriet, Bharath K. Sriperumbudur

The goal of this paper is to investigate the advantages and disadvantages of learning in Banach spaces over Hilbert spaces.

A Fast, Consistent Kernel Two-Sample Test

no code implementations • NeurIPS 2009 • Arthur Gretton, Kenji Fukumizu, Zaïd Harchaoui, Bharath K. Sriperumbudur

A kernel embedding of probability distributions into reproducing kernel Hilbert spaces (RKHS) has recently been proposed, which allows the comparison of two probability measures P and Q based on the distance between their respective embeddings: for a sufficiently rich RKHS, this distance is zero if and only if P and Q coincide.
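
The cost the paper attacks is the calibration of the test threshold; the standard permutation baseline it speeds up looks like this (biased MMD statistic, Gaussian kernel assumed):

```python
import numpy as np

def mmd2_biased(Z, m, bw=1.0):
    """Biased MMD^2 statistic computed from the pooled sample Z = [X; Y]."""
    K = np.exp(-(Z[:, None] - Z[None, :])**2 / (2 * bw**2))
    return K[:m, :m].mean() + K[m:, m:].mean() - 2 * K[:m, m:].mean()

rng = np.random.default_rng(0)
X, Y = rng.standard_normal(100), rng.standard_normal(100) + 0.75
Z, m = np.concatenate([X, Y]), len(X)

stat = mmd2_biased(Z, m)
null = [mmd2_biased(rng.permutation(Z), m) for _ in range(500)]
print("p-value:", np.mean(np.array(null) >= stat))   # small => reject P = Q
```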

On the Convergence of the Concave-Convex Procedure

no code implementations • NeurIPS 2009 • Gert R. Lanckriet, Bharath K. Sriperumbudur

In this paper, we follow a different line of reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence of CCCP, allowing a simpler and more elegant proof.
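
A toy run of the procedure whose convergence is being established: minimize the double well f(x) = x^4/4 - x^2/2 via its DC decomposition into two convex pieces (illustrative example, not from the paper):

```python
import numpy as np

# f = u - v with u(x) = x^4/4 and v(x) = x^2/2, both convex.
f = lambda x: x**4 / 4 - x**2 / 2

x = 0.3                                # initial point
for t in range(10):
    # CCCP step: linearize v at x and minimize the convex surrogate
    # u(z) - v'(x) * z, i.e. solve z^3 = x, so z = cbrt(x).
    x_new = np.cbrt(x)
    assert f(x_new) <= f(x) + 1e-12    # monotone descent (Zangwill's theory)
    x = x_new
print(x)                               # converges to the stationary point 1.0
```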

Hilbert space embeddings and metrics on probability measures

no code implementations • 30 Jul 2009 • Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, Gert R. G. Lanckriet

First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted {\em characteristic kernels}.

Dimensionality Reduction
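
For reference, the pseudometric under study is the RKHS embedding distance (the MMD):

```latex
\gamma_k(P, Q) \;=\; \Big\| \int k(\cdot, x)\, dP(x) - \int k(\cdot, x)\, dQ(x) \Big\|_{\mathcal H},
```

so $k$ is characteristic exactly when the mean embedding $P \mapsto \int k(\cdot, x)\, dP(x)$ is injective, i.e. $\gamma_k(P, Q) = 0$ forces $P = Q$.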

On integral probability metrics, φ-divergences and binary classification

no code implementations • 18 Jan 2009 • Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Gert R. G. Lanckriet

First, to understand the relation between IPMs and $\phi$-divergences, the necessary and sufficient conditions under which these classes intersect are derived: the total variation distance is shown to be the only non-trivial $\phi$-divergence that is also an IPM.

Information Theory
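
The two families in question, for reference:

```latex
\gamma_{\mathcal F}(P, Q) \;=\; \sup_{f \in \mathcal F} \Big| \int f \, dP - \int f \, dQ \Big|
\quad \text{(IPM)},
\qquad
D_\phi(P, Q) \;=\; \int \phi\!\Big( \frac{dP}{dQ} \Big)\, dQ
\quad (\phi\text{-divergence}).
```

Taking $\mathcal F$ to be the unit ball of the sup norm on one side, and $\phi(t) = |t - 1|$ on the other, both recover the total variation distance (up to normalization convention), which is the intersection point the snippet refers to.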

Characteristic Kernels on Groups and Semigroups

no code implementations • NeurIPS 2008 • Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Bharath K. Sriperumbudur

Embeddings of random variables in reproducing kernel Hilbert spaces (RKHSs) may be used to conduct statistical inference based on higher order moments.
