Search Results for author: Zoltan Szabo

Found 17 papers, 8 papers with code

The Minimax Rate of HSIC Estimation for Translation-Invariant Kernels

no code implementations • 12 Mar 2024 • Florian Kalinke, Zoltan Szabo

Kernel techniques are among the most influential approaches in data science and statistics.

Translation

Random Fourier Signature Features

1 code implementation • 20 Nov 2023 • Csaba Toth, Harald Oberhauser, Zoltan Szabo

Tensor algebras give rise to one of the most powerful measures of similarity for sequences of arbitrary length, the signature kernel, which comes with attractive theoretical guarantees from stochastic analysis.

Time Series

Functional Output Regression with Infimal Convolution: Exploring the Huber and ε-insensitive Losses

1 code implementation • 16 Jun 2022 • Alex Lambert, Dimitri Bouche, Zoltan Szabo, Florence d'Alché-Buc

The efficiency of the approach is demonstrated and contrasted with the classical squared loss setting on both synthetic and real-world benchmarks.

regression

Handling Hard Affine SDP Shape Constraints in RKHSs

no code implementations • 5 Jan 2021 • Pierre-Cyril Aubin-Frankowski, Zoltan Szabo

The modular nature of the proposed approach makes it possible to handle multiple shape constraints simultaneously and to tighten an infinite number of constraints into finitely many.

Econometrics

Hard Shape-Constrained Kernel Machines

1 code implementation • NeurIPS 2020 • Pierre-Cyril Aubin-Frankowski, Zoltan Szabo

Shape constraints (such as non-negativity, monotonicity, convexity) play a central role in a large number of applications, as they usually improve performance for small sample sizes and help interpretability.

On Kernel Derivative Approximation with Random Fourier Features

no code implementations • 11 Oct 2018 • Zoltan Szabo, Bharath K. Sriperumbudur

Random Fourier features (RFF) represent one of the most popular and widespread techniques in machine learning to scale up kernel algorithms.
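
To make the idea concrete, here is a minimal NumPy sketch of the RFF approximation for the Gaussian kernel (not the paper's code; the function name, bandwidth, and feature count are illustrative assumptions):

```python
import numpy as np

def rff_features(X, num_features=500, sigma=1.0, seed=0):
    """Random Fourier features z(x) with z(x)^T z(y) ~ k(x, y)
    for the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    # Frequencies are drawn from the kernel's spectral measure, N(0, sigma^{-2} I).
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Usage: inner products of the features approximate the exact Gram matrix.
X = np.random.randn(100, 3)
Z = rff_features(X)
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.abs(Z @ Z.T - K_exact).max())  # shrinks as num_features grows
```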

MONK -- Outlier-Robust Mean Embedding Estimation by Median-of-Means

no code implementations • 13 Feb 2018 • Matthieu Lerasle, Zoltan Szabo, Timothee Mathieu, Guillaume Lecue

Mean embeddings provide an extremely flexible and powerful tool in machine learning and statistics to represent probability distributions and define a semi-metric (MMD, maximum mean discrepancy; also called N-distance or energy distance), with numerous successful applications.
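
A hedged sketch of the median-of-means idea in this setting, assuming a Gaussian kernel and evaluation of the mean embedding at a few fixed locations (the actual MONK estimator operates block-wise in the RKHS and differs in detail):

```python
import numpy as np

def gauss_kernel(X, T, sigma=1.0):
    """k(x, t) = exp(-||x - t||^2 / (2 sigma^2)) for all row pairs of X and T."""
    sq = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def mom_mean_embedding(X, T, num_blocks=10, seed=0):
    """Median-of-means estimate of mu_P(t) = E[k(X, t)] at the locations T:
    average within random blocks, then take the median across blocks, so a
    minority of outlier-contaminated blocks cannot drag the estimate away."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), num_blocks)
    block_means = [gauss_kernel(X[b], T).mean(axis=0) for b in blocks]
    return np.median(block_means, axis=0)
```

Replacing the median by a plain average recovers the standard, non-robust empirical mean embedding.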

Characteristic and Universal Tensor Product Kernels

no code implementations • 28 Aug 2017 • Zoltan Szabo, Bharath K. Sriperumbudur

Maximum mean discrepancy (MMD; also called energy distance or N-distance in statistics) and the Hilbert-Schmidt independence criterion (HSIC; known as distance covariance in statistics) are among the most popular and successful approaches to quantifying the difference and the independence of random variables, respectively.
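
For reference, the standard biased empirical HSIC estimator (a textbook formula, not specific to this paper; the kernel choice below is an illustrative assumption) is a one-liner on Gram matrices:

```python
import numpy as np

def gauss_gram(X, sigma=1.0):
    """Gaussian Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic_biased(K, L):
    """Biased empirical HSIC = trace(K H L H) / n^2, with H = I - 11^T / n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

# Usage: HSIC is near zero for independent samples, larger under dependence.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
Y = X ** 2 + 0.1 * rng.normal(size=(200, 1))   # Y depends on X
print(hsic_biased(gauss_gram(X), gauss_gram(Y)))
```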

A Linear-Time Kernel Goodness-of-Fit Test

4 code implementations • NeurIPS 2017 • Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur Gretton

We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples.
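
As a toy illustration of how such a statistic can be linear in the sample size, here is a hedged sketch of a finite-set Stein discrepancy, specialized to the model N(0, 1) on one-dimensional data (the paper's statistic uses an unbiased estimator and optimizes the test locations; everything below is a simplifying assumption):

```python
import numpy as np

def fssd2_standard_normal(X, V, sigma=1.0):
    """Biased estimate of a finite-set Stein discrepancy squared for the
    model p = N(0, 1), with a Gaussian kernel, 1-d data X, and test
    locations V. Cost is O(n * J): one pass over the n samples per location."""
    X = np.asarray(X, float).reshape(-1, 1)    # n x 1
    V = np.asarray(V, float).reshape(1, -1)    # 1 x J
    K = np.exp(-(X - V) ** 2 / (2 * sigma ** 2))
    score = -X                                 # d/dx log p(x) for p = N(0, 1)
    xi = score * K - (X - V) / sigma ** 2 * K  # Stein-witness features
    return (xi.mean(axis=0) ** 2).mean()       # ~0 when X really follows p
```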

An Adaptive Test of Independence with Analytic Kernel Embeddings

1 code implementation • ICML 2017 • Wittawat Jitkrittum, Zoltan Szabo, Arthur Gretton

The dependence measure is the difference between analytic embeddings of the joint distribution and the product of the marginals, evaluated at a finite set of locations (features).
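
A minimal sketch of such a location-based dependence measure, assuming Gaussian kernels and paired test locations (V[j], W[j]); the paper's test additionally normalizes this witness to obtain a calibrated statistic:

```python
import numpy as np

def gauss(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def dependence_witness(X, Y, V, W):
    """Joint embedding minus product of marginal embeddings, evaluated at
    the location pairs (V[j], W[j]); approximately zero under independence."""
    Kx, Ly = gauss(X, V), gauss(Y, W)              # both n x J
    joint = (Kx * Ly).mean(axis=0)                 # estimates E[k(X,v) l(Y,w)]
    marginals = Kx.mean(axis=0) * Ly.mean(axis=0)  # E[k(X,v)] E[l(Y,w)]
    return joint - marginals
```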

Interpretable Distribution Features with Maximum Testing Power

1 code implementation • NeurIPS 2016 • Wittawat Jitkrittum, Zoltan Szabo, Kacper Chwialkowski, Arthur Gretton

Two semimetrics on probability distributions are proposed, given as the sum of differences of expectations of analytic functions evaluated at spatial or frequency locations (i.e., features).
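
In the same spirit, a hedged sketch of the spatial ("mean embedding") variant, assuming Gaussian analytic functions at locations V; the actual test statistic further normalizes these differences by their covariance:

```python
import numpy as np

def me_semimetric(X, Y, V, sigma=1.0):
    """Sum over locations v_j of squared differences between E[k(X, v_j)]
    and E[k(Y, v_j)], with an analytic (Gaussian) kernel k."""
    def mean_embedding(A):
        sq = ((A[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2)).mean(axis=0)  # length J
    diff = mean_embedding(X) - mean_embedding(Y)
    return (diff ** 2).sum()
```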

Optimal Rates for Random Fourier Features

no code implementations • NeurIPS 2015 • Bharath K. Sriperumbudur, Zoltan Szabo

Kernel methods represent one of the most powerful tools in machine learning for tackling problems expressed in terms of function values and derivatives, thanks to their capability to represent and model complex relations.

Learning Theory for Distribution Regression

1 code implementation • 8 Nov 2014 • Zoltan Szabo, Bharath Sriperumbudur, Barnabas Poczos, Arthur Gretton

In this paper, we study a simple, analytically computable, ridge regression-based alternative to distribution regression, where we embed the distributions into a reproducing kernel Hilbert space and learn the regressor from the embeddings to the outputs; a minimal sketch of this pipeline follows below.

Density Estimation, Learning Theory +2
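
A hedged sketch of that pipeline, substituting random Fourier features for the exact RKHS computation so that each bag's mean embedding becomes a finite vector (the paper analyzes the exact ridge estimator; names and hyperparameters here are illustrative):

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier feature map; a bag's feature average approximates
    the kernel mean embedding of its underlying distribution."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def fit_distribution_regression(bags, y, num_features=300, lam=1e-3, seed=0):
    """Ridge regression from empirical mean embeddings of the input bags
    (each bag: an (n_i, d) sample from its distribution) to real outputs y."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(bags[0].shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)
    M = np.stack([rff(B, W, b).mean(axis=0) for B in bags])  # bag embeddings
    beta = np.linalg.solve(M.T @ M + lam * np.eye(num_features), M.T @ y)
    return lambda B: rff(B, W, b).mean(axis=0) @ beta        # predictor
```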

Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)

no code implementations • NeurIPS 2015 • Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani

We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.

Two-stage Sampled Learning Theory on Distributions

no code implementations • 7 Feb 2014 • Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur

To the best of our knowledge, the only existing method with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which suffers from slow convergence in high dimensions) and requires the domain of the distributions to be compact Euclidean.

Density Estimation, Learning Theory +3

Emotional Expression Classification using Time-Series Kernels

no code implementations • 8 Jun 2013 • Andras Lorincz, Laszlo Jeni, Zoltan Szabo, Jeffrey Cohn, Takeo Kanade

Estimation of facial expressions, as spatio-temporal processes, can take advantage of kernel methods if one considers facial landmark positions and their motion in 3D space.

Classification, Dynamic Time Warping +3
