Search Results for author: Leena Chennuru Vankadara

Found 13 papers, 3 papers with code

Measures of distortion for machine learning

no code implementations NeurIPS 2018 Leena Chennuru Vankadara, Ulrike von Luxburg

We show, both in theory and in experiments, that it satisfies all desirable properties and is a better candidate for evaluating distortion in the context of machine learning.

BIG-bench Machine Learning

Large Scale Representation Learning from Triplet Comparisons

no code implementations 25 Sep 2019 Siavash Haghiri, Leena Chennuru Vankadara, Ulrike von Luxburg

This problem has been studied in a sub-community of machine learning by the name "Ordinal Embedding".

Representation Learning

On the optimality of kernels for high-dimensional clustering

no code implementations 1 Dec 2019 Leena Chennuru Vankadara, Debarghya Ghoshdastidar

This is the first work that provides such optimality guarantees for the kernel k-means as well as its convex relaxation.

Clustering, Vocal Bursts Intensity Prediction

Insights into Ordinal Embedding Algorithms: A Systematic Evaluation

no code implementations 3 Dec 2019 Leena Chennuru Vankadara, Siavash Haghiri, Michael Lohaus, Faiz Ul Wahab, Ulrike von Luxburg

However, there does not exist a fair and thorough assessment of these embedding methods, and therefore several key questions remain unanswered: which algorithms perform better when the embedding dimension is constrained or when few triplet comparisons are available?

Representation Learning

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations 18 Oct 2021 Leena Chennuru Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

Clustering

Causal Forecasting: Generalization Bounds for Autoregressive Models

1 code implementation 18 Nov 2021 Leena Chennuru Vankadara, Philipp Michael Faller, Michaela Hardt, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing

Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions under the VAR model).

Learning Theory, Time Series +1

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

no code implementations NeurIPS 2021 Pascal Mattia Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

While VC Dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.

Learning Theory, Node Classification

Interpolation and Regularization for Causal Learning

no code implementations 18 Feb 2022 Leena Chennuru Vankadara, Luca Rendsburg, Ulrike von Luxburg, Debarghya Ghoshdastidar

If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.

A Consistent Estimator for Confounding Strength

no code implementations 3 Nov 2022 Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike von Luxburg

Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding.

Regression

Reinterpreting causal discovery as the task of predicting unobserved joint statistics

no code implementations 11 May 2023 Dominik Janzing, Philipp M. Faller, Leena Chennuru Vankadara

Here, causal discovery becomes more modest and more accessible to empirical tests than usual: rather than trying to find a causal hypothesis that is "true", a causal hypothesis is deemed useful whenever it correctly predicts statistical properties of unobserved joint distributions.

Causal Discovery, Causal Inference +1

Self-Compatibility: Evaluating Causal Discovery without Ground Truth

1 code implementation 18 Jul 2023 Philipp M. Faller, Leena Chennuru Vankadara, Atalanti A. Mastakouri, Francesco Locatello, Dominik Janzing

In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.

Causal Discovery, Model Selection

Explaining Kernel Clustering via Decision Trees

no code implementations 15 Feb 2024 Maximilian Fleissner, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.

Clustering, Interpretable Machine Learning
