Search Results for author: Leena Chennuru Vankadara

Found 9 papers, 1 paper with code

Interpolation and Regularization for Causal Learning

no code implementations 18 Feb 2022 Leena Chennuru Vankadara, Luca Rendsburg, Ulrike Von Luxburg, Debarghya Ghoshdastidar

If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.
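
As a purely illustrative sketch of the objects this abstract refers to (a ridge-style regularization path, the minimum-norm interpolator at zero regularization, and a mildly negative regularization strength), and not the paper's causal estimator or analysis, one could compare train and test error of regularized least squares in an overparameterized linear model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized linear regression: more features than samples, so the
# unregularized (lam = 0) solution below interpolates the training data.
n, d = 20, 50
beta = rng.standard_normal(d) / np.sqrt(d)
X, X_test = rng.standard_normal((n, d)), rng.standard_normal((500, d))
y = X @ beta + 0.3 * rng.standard_normal(n)
y_test = X_test @ beta + 0.3 * rng.standard_normal(500)

def ridge(X, y, lam):
    """Ridge estimator (X^T X + lam I)^{-1} X^T y; lam = 0 falls back to the
    minimum-norm interpolator via the pseudoinverse. A negative lam is only
    defined while X^T X + lam I remains invertible."""
    if lam == 0.0:
        return np.linalg.pinv(X) @ y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [10.0, 1.0, 0.1, 0.0, -0.1]:
    b = ridge(X, y, lam)
    print(f"lam={lam:+5.1f}  train MSE={np.mean((X @ b - y) ** 2):.3f}"
          f"  test MSE={np.mean((X_test @ b - y_test) ** 2):.3f}")
```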

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks

no code implementations NeurIPS 2021 Pascal Mattia Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

While VC Dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.

Learning Theory · Node Classification
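
The setting can be pictured with a small, hypothetical sketch: sample a graph from a two-block stochastic block model and apply one standard GCN-style propagation step. This only illustrates the kind of model the bounds are about, not the transductive Rademacher analysis itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-block stochastic block model: n nodes, within-block edge prob p, across q.
n, p, q = 200, 0.10, 0.02
labels = np.repeat([0, 1], n // 2)
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = rng.random((n, n)) < probs
A = np.triu(A, 1)
A = (A | A.T).astype(float)

# One GCN-style propagation step: H = D^{-1/2} (A + I) D^{-1/2} X,
# with D the degree matrix of A + I and X some toy node features.
A_tilde = A + np.eye(n)
deg = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
X = rng.standard_normal((n, 8))            # toy node features
H = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X  # smoothed features used by a GCN layer

print(H.shape)  # (200, 8)
```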

Causal Forecasting: Generalization Bounds for Autoregressive Models

no code implementations 18 Nov 2021 Leena Chennuru Vankadara, Philipp Michael Faller, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing

Here, we study the problem of *causal generalization* -- generalizing from the observational to interventional distributions -- in forecasting.

Learning Theory · Time Series
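
As context for the model class studied here (and not the paper's causal generalization bounds), the following is a minimal sketch of fitting an autoregressive model by least squares and producing a one-step forecast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + noise.
T, phi = 500, np.array([0.6, -0.2])
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi @ x[t - 2:t][::-1] + 0.3 * rng.standard_normal()

# Fit an AR(p) model by ordinary least squares on lagged values.
p = 2
Y = x[p:]
Z = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])  # lags 1..p
phi_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
print("estimated AR coefficients:", phi_hat)

# One-step-ahead forecast from the last p observations.
forecast = phi_hat @ x[-p:][::-1]
print("one-step forecast:", forecast)
```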

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations 18 Oct 2021 Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.
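
A hedged illustration of the setting, not the paper's recovery analysis: kernel-based clustering (here off-the-shelf spectral clustering with an RBF affinity from scikit-learn) separating a mixture whose components are far from Gaussian:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_circles
from sklearn.metrics import adjusted_rand_score

# Two concentric rings: a mixture whose components are not Gaussian,
# i.e. a setting where parametric mixture assumptions fail.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

# Kernel-based clustering via spectral clustering with an RBF affinity.
labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=10.0,
                            random_state=0).fit_predict(X)

print("adjusted Rand index:", adjusted_rand_score(y, labels))
```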

Insights into Ordinal Embedding Algorithms: A Systematic Evaluation

no code implementations 3 Dec 2019 Leena Chennuru Vankadara, Siavash Haghiri, Michael Lohaus, Faiz Ul Wahab, Ulrike Von Luxburg

However, a fair and thorough assessment of these embedding methods does not exist, and several key questions therefore remain unanswered: which algorithms perform better when the embedding dimension is constrained or when only a few triplet comparisons are available?

Representation Learning
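
The evaluation question can be made concrete with a toy sketch (an assumed setup, not the paper's benchmark): sample triplet comparisons from ground-truth points and measure the fraction of triplets a candidate low-dimensional embedding violates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth points and a candidate low-dimensional embedding (here a toy
# random projection stands in for the output of any ordinal embedding method).
n, d, d_embed = 100, 10, 2
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, d_embed)) / np.sqrt(d)

def sample_triplets(points, n_triplets, rng):
    """Triplets (i, j, k) meaning i is closer to j than to k in `points`."""
    triplets = []
    while len(triplets) < n_triplets:
        i, j, k = rng.choice(len(points), size=3, replace=False)
        dij = np.linalg.norm(points[i] - points[j])
        dik = np.linalg.norm(points[i] - points[k])
        triplets.append((i, j, k) if dij < dik else (i, k, j))
    return np.array(triplets)

def triplet_error(points, triplets):
    """Fraction of triplets (i, j, k) violated by `points`."""
    i, j, k = triplets.T
    dij = np.linalg.norm(points[i] - points[j], axis=1)
    dik = np.linalg.norm(points[i] - points[k], axis=1)
    return np.mean(dij >= dik)

triplets = sample_triplets(X, 2000, rng)
print("triplet error of candidate embedding:", triplet_error(Y, triplets))
```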

On the optimality of kernels for high-dimensional clustering

no code implementations 1 Dec 2019 Leena Chennuru Vankadara, Debarghya Ghoshdastidar

This is the first work that provides such optimality guarantees for the kernel k-means as well as its convex relaxation.
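
For orientation, a minimal numpy sketch of kernel k-means itself (Lloyd-style iterations on a kernel matrix); the paper's convex relaxation and optimality guarantees are not reproduced here:

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Lloyd-style kernel k-means operating directly on a kernel matrix K.

    Squared feature-space distance from point i to the mean of cluster c:
        K[i, i] - 2 * mean_{j in c} K[i, j] + mean_{j, l in c} K[j, l]
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy high-dimensional two-component Gaussian mixture and an RBF kernel.
rng = np.random.default_rng(0)
n, d = 60, 100
X = np.vstack([rng.standard_normal((n, d)) + 1.0,
               rng.standard_normal((n, d)) - 1.0])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / (2.0 * d))
print("cluster sizes:", np.bincount(kernel_kmeans(K, 2)))
```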

Large Scale Representation Learning from Triplet Comparisons

no code implementations 25 Sep 2019 Siavash Haghiri, Leena Chennuru Vankadara, Ulrike Von Luxburg

This problem has been studied in a sub-community of machine learning under the name "Ordinal Embedding".

Representation Learning

Measures of distortion for machine learning

no code implementations NeurIPS 2018 Leena Chennuru Vankadara, Ulrike Von Luxburg

We show, both in theory and in experiments, that it satisfies all desirable properties and is a better candidate for evaluating distortion in the context of machine learning.
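
For comparison, a small sketch computing two standard notions of distortion (worst-case expansion times contraction, and the spread of pairwise distance ratios) for a random-projection embedding; the specific measure proposed in the paper is not implemented here:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Original high-dimensional points and a random-projection embedding.
n, d, d_embed = 200, 50, 5
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, d_embed)) / np.sqrt(d_embed)

d_orig = pdist(X)      # pairwise Euclidean distances, original space
d_emb = pdist(Y)       # pairwise distances after embedding
ratios = d_emb / d_orig

# Classical worst-case distortion: maximum expansion times maximum contraction.
worst_case = ratios.max() / ratios.min()
# An average-style alternative: spread of the normalized distance ratios.
average_style = np.std(ratios / ratios.mean())

print(f"worst-case distortion: {worst_case:.2f}")
print(f"spread of distance ratios: {average_style:.3f}")
```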
