no code implementations • NeurIPS 2018 • Leena Chennuru Vankadara, Ulrike Von Luxburg
We show, both in theory and in experiments, that it satisfies all desirable properties and is a better candidate for evaluating distortion in the context of machine learning.
no code implementations • 25 Sep 2019 • Siavash Haghiri, Leena Chennuru Vankadara, Ulrike Von Luxburg
This problem has been studied in a sub-community of machine learning under the name "ordinal embedding".
no code implementations • 1 Dec 2019 • Leena Chennuru Vankadara, Debarghya Ghoshdastidar
This is the first work to provide such optimality guarantees for kernel k-means as well as its convex relaxation.
no code implementations • 3 Dec 2019 • Leena Chennuru Vankadara, Siavash Haghiri, Michael Lohaus, Faiz Ul Wahab, Ulrike Von Luxburg
However, these embedding methods have not yet been assessed fairly and thoroughly, so several key questions remain unanswered: which algorithms perform better when the embedding dimension is constrained or only few triplet comparisons are available?
1 code implementation • ICLR 2022 • Mahalakshmi Sabanayagam, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results.
no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar
Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.
1 code implementation • 18 Nov 2021 • Leena Chennuru Vankadara, Philipp Michael Faller, Michaela Hardt, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing
Under causal sufficiency, the problem of causal generalization amounts to learning under covariate shifts, albeit with additional structure (restriction to interventional distributions under the VAR model).
no code implementations • NeurIPS 2021 • Pascal Mattia Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
While the VC dimension yields only trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.
no code implementations • 18 Feb 2022 • Leena Chennuru Vankadara, Luca Rendsburg, Ulrike Von Luxburg, Debarghya Ghoshdastidar
If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.
no code implementations • 3 Nov 2022 • Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike Von Luxburg
Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding.
no code implementations • 11 May 2023 • Dominik Janzing, Philipp M. Faller, Leena Chennuru Vankadara
Here, causal discovery becomes more modest and more amenable to empirical tests than usual: rather than trying to find a causal hypothesis that is "true", a causal hypothesis is deemed "useful" whenever it correctly predicts statistical properties of unobserved joint distributions.
1 code implementation • 18 Jul 2023 • Philipp M. Faller, Leena Chennuru Vankadara, Atalanti A. Mastakouri, Francesco Locatello, Dominik Janzing
In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth.
no code implementations • 15 Feb 2024 • Maximilian Fleissner, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.