1 code implementation • ICML 2020 • Michael Lohaus, Michaël Perrot, Ulrike von Luxburg
We address the problem of classification under fairness constraints.
no code implementations • ICML 2020 • Luca Rendsburg, Holger Heidrich, Ulrike von Luxburg
In this paper, we investigate the implicit bias of NetGAN.
no code implementations • 5 Feb 2024 • Sebastian Bordt, Ulrike von Luxburg
In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used.
1 code implementation • NeurIPS 2023 • Moritz Haas, David Holzmüller, Ulrike von Luxburg, Ingo Steinwart
In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough.
1 code implementation • 8 Mar 2023 • Sebastian Bordt, Ulrike von Luxburg
We asked ChatGPT to participate in an undergraduate computer science exam on "Algorithms and Data Structures".
no code implementations • 7 Mar 2023 • Philipp Berens, Kyle Cranmer, Neil D. Lawrence, Ulrike von Luxburg, Jessica Montgomery
This report summarises the discussions from the seminar and provides a roadmap to suggest how different communities can collaborate to deliver a new wave of progress in AI and its application for scientific discovery.
1 code implementation • 5 Nov 2022 • Moritz Haas, Bedartha Goswami, Ulrike von Luxburg
Network-based analyses of dynamical systems have become increasingly popular in climate science.
no code implementations • 3 Nov 2022 • Solveig Klepper, Ulrike von Luxburg
In our work, we prove that the solution space induced by graph auto-encoders is a subset of the solution space of a linear map.
no code implementations • 3 Nov 2022 • Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike von Luxburg
Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding.
1 code implementation • 8 Sep 2022 • Sebastian Bordt, Ulrike von Luxburg
We then show that $n$-Shapley Values, as well as the Shapley-Taylor and Faith-Shap interaction indices, recover GAMs with interaction terms up to order $n$.
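For an additive model without interactions, exact Shapley values can be computed by brute-force subset enumeration, and they recover the model's per-feature contributions. A minimal sketch in plain NumPy (the helper `shapley_values`, the linear model, and the baseline point are illustrative assumptions, not the paper's code):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, b):
    """Exact Shapley values by enumerating all feature subsets.
    Features outside the coalition are set to the baseline b."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                coeff = factorial(k) * factorial(d - k - 1) / factorial(d)
                z_with, z_wo = b.copy(), b.copy()
                z_with[list(S) + [i]] = x[list(S) + [i]]
                z_wo[list(S)] = x[list(S)]
                phi[i] += coeff * (f(z_with) - f(z_wo))
    return phi

# For an additive f(x) = w @ x, the Shapley value of feature i
# is w_i * (x_i - b_i), with no interaction terms.
w = np.array([1.0, -2.0, 3.0])
f = lambda z: z @ w
x, b = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(shapley_values(f, x, b))  # → [ 1. -2.  3.]
```

The enumeration costs $O(2^d)$ model evaluations, which is why practical explanation tools approximate it by sampling.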
no code implementations • 15 Jun 2022 • Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike von Luxburg
We propose a necessary criterion: their feature attributions need to be aligned with the tangent space of the data manifold.
1 code implementation • 7 Mar 2022 • Luca Rendsburg, Agustinus Kristiadi, Philipp Hennig, Ulrike von Luxburg
By reframing the problem in terms of incompatible conditional distributions we arrive at a natural solution: the Gibbs prior.
no code implementations • 18 Feb 2022 • Leena Chennuru Vankadara, Luca Rendsburg, Ulrike von Luxburg, Debarghya Ghoshdastidar
If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.
1 code implementation • 25 Jan 2022 • Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg
In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.
no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar
Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.
no code implementations • 9 Jul 2021 • Sascha Meyen, Frieder Göppert, Helen Alber, Ulrike von Luxburg, Volker H. Franz
We explicitly construct the individual classifiers that attain the upper and lower bounds: specialists and generalists.
1 code implementation • 25 Aug 2020 • Damien Garreau, Ulrike von Luxburg
As an example, for linear functions we show that LIME has the desirable property of providing explanations that are proportional to the coefficients of the function to explain, while ignoring coordinates that the function does not use.
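The core of the LIME recipe (sample perturbations, weight them by proximity, fit a linear surrogate) can be sketched in a few lines; for an exactly linear function the surrogate recovers its coefficients, including a zero weight on the unused coordinate. The kernel width, sample count, and perturbation scale below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0, 0.0])   # third coordinate unused by f
f = lambda X: X @ w              # function to explain

x0 = np.array([1.0, 1.0, 1.0])   # point to explain
# Perturb around x0 and weight samples by proximity (Gaussian kernel),
# then fit a weighted linear surrogate -- the heart of LIME.
Z = x0 + rng.normal(scale=0.5, size=(500, 3))
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

A = np.hstack([Z, np.ones((500, 1))])   # design matrix with intercept
W = np.sqrt(weights)[:, None]           # weighted least squares via sqrt-weights
coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * f(Z), rcond=None)
print(coef[:3])  # ≈ [2, -1, 0]: proportional to w, zero on the unused coordinate
```

Because the target function is exactly linear and noise-free, the weighted least-squares fit recovers the coefficients up to floating-point error regardless of the kernel width.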
no code implementations • 9 Jul 2020 • Sebastian Bordt, Ulrike von Luxburg
A lower bound quantifies the worst-case hardness of optimally advising a decision maker who is opaque or has access to private information.
1 code implementation • 25 Jun 2020 • Solveig Klepper, Christian Elbracht, Diego Fioravanti, Jakob Kneip, Luca Rendsburg, Maximilian Teegen, Ulrike von Luxburg
Given a collection of cuts of any dataset, tangles aggregate these cuts to point in the direction of a dense structure.
no code implementations • 10 Jan 2020 • Damien Garreau, Ulrike von Luxburg
We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear.
no code implementations • 3 Dec 2019 • Leena Chennuru Vankadara, Siavash Haghiri, Michael Lohaus, Faiz Ul Wahab, Ulrike von Luxburg
However, a fair and thorough assessment of these embedding methods does not yet exist, so several key questions remain unanswered: which algorithms perform better when the embedding dimension is constrained or only few triplet comparisons are available?
no code implementations • 25 Sep 2019 • Siavash Haghiri, Leena Chennuru Vankadara, Ulrike von Luxburg
This problem has been studied in a sub-community of machine learning by the name "Ordinal Embedding".
no code implementations • 21 Aug 2019 • Siavash Haghiri, Felix Wichmann, Ulrike von Luxburg
We propose to use ordinal embedding methods from machine learning to estimate the scaling function from the relative judgments.
no code implementations • 27 Jun 2019 • Michael Lohaus, Philipp Hennig, Ulrike von Luxburg
To investigate objects without a describable notion of distance, one can gather ordinal information by asking triplet comparisons of the form "Is object $x$ closer to $y$ or is $x$ closer to $z$?"
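Such a triplet oracle is trivial to simulate when ground-truth coordinates are available; comparison-based algorithms then only ever see the oracle's answers, never the coordinates themselves. A minimal sketch (the helper `triplet_oracle` and the toy points are illustrative assumptions):

```python
import numpy as np

def triplet_oracle(X, x, y, z):
    """Answer the comparison "Is object x closer to y or to z?"
    using the (hidden) Euclidean distances; returns the closer index."""
    return y if np.linalg.norm(X[x] - X[y]) < np.linalg.norm(X[x] - X[z]) else z

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(triplet_oracle(X, 0, 1, 2))  # → 1  (point 1 is much closer to point 0)
```

A comparison-based method would issue many such queries and work only with the returned indices, which is what makes the setting applicable to objects without numeric features.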
no code implementations • 17 May 2019 • Siavash Haghiri, Patricia Rubisch, Robert Geirhos, Felix Wichmann, Ulrike von Luxburg
In this paper we study whether the use of comparison-based (ordinal) data, combined with machine learning algorithms, can boost the reliability of crowdsourcing studies for psychophysics, such that they can achieve performance close to a lab experiment.
no code implementations • NeurIPS 2018 • Cheng Tang, Damien Garreau, Ulrike von Luxburg
As a consequence, even highly randomized trees can lead to inconsistent forests if no subsampling is used, which implies that some of the commonly used setups for random forests can be inconsistent.
no code implementations • NeurIPS 2018 • Leena Chennuru Vankadara, Ulrike von Luxburg
We show, both in theory and in experiments, that it satisfies all desirable properties and is a better candidate for evaluating distortion in the context of machine learning.
1 code implementation • NeurIPS 2018 • Debarghya Ghoshdastidar, Ulrike von Luxburg
Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs.
1 code implementation • NeurIPS 2019 • Debarghya Ghoshdastidar, Michaël Perrot, Ulrike von Luxburg
We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities.
no code implementations • 31 Oct 2018 • Michaël Perrot, Ulrike von Luxburg
We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object $x_i$ is closer to object $x_j$ than to object $x_k$."
no code implementations • ICML 2018 • Siavash Haghiri, Damien Garreau, Ulrike von Luxburg
Assume we are given a set of items from a general metric space, but we have access neither to the representation of the data nor to the distances between data points.
1 code implementation • 31 Aug 2017 • Nihar B. Shah, Behzad Tabibian, Krikamol Muandet, Isabelle Guyon, Ulrike von Luxburg
Neural Information Processing Systems (NIPS) is a top-tier annual conference in machine learning.
no code implementations • 4 Jul 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike von Luxburg
Given a population of $m$ graphs from each model, we derive minimax separation rates for the problem of testing $P=Q$ against $d(P, Q)>\rho$.
no code implementations • 17 May 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike von Luxburg
We consider a two-sample hypothesis testing problem, where the distributions are defined on the space of undirected graphs, and one has access to only one observation from each model.
no code implementations • 5 Apr 2017 • Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike von Luxburg
We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points.
no code implementations • NeurIPS 2017 • Matthäus Kleindessner, Ulrike von Luxburg
Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set.
no code implementations • 23 Feb 2016 • Matthäus Kleindessner, Ulrike von Luxburg
In recent years it has become popular to study machine learning problems in a setting of ordinal distance information rather than numerical distance measurements.
no code implementations • 2 Jun 2015 • Mehdi S. M. Sajjadi, Morteza Alamgir, Ulrike von Luxburg
Peer grading is the process of students reviewing each other's work, such as homework submissions; it has lately become a popular mechanism in massive open online courses (MOOCs).
no code implementations • 5 Jun 2014 • Kamalika Chaudhuri, Sanjoy Dasgupta, Samory Kpotufe, Ulrike von Luxburg
For a density $f$ on ${\mathbb R}^d$, a {\it high-density cluster} is any connected component of $\{x: f(x) \geq \lambda\}$, for some $\lambda > 0$.
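On a grid, the high-density clusters at level $\lambda$ can be read off directly by thresholding the density and labeling connected components. A minimal one-dimensional sketch with SciPy, using an assumed two-mode Gaussian mixture as the example density:

```python
import numpy as np
from scipy.ndimage import label
from scipy.stats import norm

# Example density f: mixture of two well-separated Gaussians on the real line.
xs = np.linspace(-6, 6, 1201)
f = 0.5 * norm.pdf(xs, loc=-3) + 0.5 * norm.pdf(xs, loc=3)

# High-density clusters at level lambda: connected components of {x: f(x) >= lam}.
lam = 0.1
components, n_clusters = label(f >= lam)
print(n_clusters)  # → 2: one cluster around each mode
```

Lowering $\lambda$ until the two components merge traces out the cluster tree that density-based clustering methods aim to estimate.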
no code implementations • NeurIPS 2013 • Ulrike von Luxburg, Morteza Alamgir
Consider an unweighted k-nearest neighbor graph on n points that have been sampled i.i.d.
4 code implementations • 1 Nov 2007 • Ulrike von Luxburg
In recent years, spectral clustering has become one of the most popular modern clustering algorithms.
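In its simplest two-cluster form, spectral clustering builds a similarity graph, forms the unnormalized Laplacian $L = D - W$, and splits the data by the sign of the second-smallest eigenvector (the Fiedler vector). A minimal sketch on assumed toy data (the Gaussian kernel width and blob positions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs of 20 points each.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

# Gaussian similarity graph and unnormalized graph Laplacian L = D - W.
sq = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
W = np.exp(-sq / 2.0)
L = np.diag(W.sum(axis=1)) - W

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# is nearly constant on each blob, with opposite signs across blobs.
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)  # one constant block of 20 labels, then the other
```

For more than two clusters, one instead embeds the points into the first $k$ eigenvectors and runs k-means in that embedding.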