Search Results for author: Ulrike Von Luxburg

Found 41 papers, 13 papers with code

Statistics without Interpretation: A Sober Look at Explainable Machine Learning

no code implementations • 5 Feb 2024 • Sebastian Bordt, Ulrike Von Luxburg

In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used.

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension

1 code implementation • NeurIPS 2023 • Moritz Haas, David Holzmüller, Ulrike Von Luxburg, Ingo Steinwart

In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough.

regression

ChatGPT Participates in a Computer Science Exam

1 code implementation • 8 Mar 2023 • Sebastian Bordt, Ulrike Von Luxburg

We asked ChatGPT to participate in an undergraduate computer science exam on "Algorithms and Data Structures".

AI for Science: An Emerging Agenda

no code implementations • 7 Mar 2023 • Philipp Berens, Kyle Cranmer, Neil D. Lawrence, Ulrike Von Luxburg, Jessica Montgomery

This report summarises the discussions from the seminar and provides a roadmap to suggest how different communities can collaborate to deliver a new wave of progress in AI and its application for scientific discovery.

Pitfalls of Climate Network Construction: A Statistical Perspective

1 code implementation • 5 Nov 2022 • Moritz Haas, Bedartha Goswami, Ulrike Von Luxburg

Network-based analyses of dynamical systems have become increasingly popular in climate science.

Relating graph auto-encoders to linear models

no code implementations • 3 Nov 2022 • Solveig Klepper, Ulrike Von Luxburg

In our work, we prove that the solution space induced by graph auto-encoders is a subset of the solution space of a linear map.

Inductive Bias

A Consistent Estimator for Confounding Strength

no code implementations • 3 Nov 2022 • Luca Rendsburg, Leena Chennuru Vankadara, Debarghya Ghoshdastidar, Ulrike Von Luxburg

Regression on observational data can fail to capture a causal relationship in the presence of unobserved confounding.

regression
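
The failure mode described in this entry is easy to reproduce. The following simulation is an illustrative sketch only (not the paper's estimator of confounding strength; all variable names are made up): a hidden confounder drives both the covariate and the outcome, and plain OLS overshoots the true causal coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder H drives both the observed covariate X and the outcome Y.
h = rng.normal(size=n)
x = 0.8 * h + rng.normal(size=n)
y = 1.0 * x + 2.0 * h + rng.normal(size=n)    # true causal effect of X on Y is 1.0

# Regressing Y on X alone absorbs the confounded path through H (estimate roughly 2, not 1).
beta_naive = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

# Adjusting for H, which is only possible when the confounder is observed, recovers roughly 1.
beta_adjusted = np.linalg.lstsq(np.column_stack([x, h]), y, rcond=None)[0][0]

print(f"naive OLS: {beta_naive:.2f}   adjusted for H: {beta_adjusted:.2f}")
```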

From Shapley Values to Generalized Additive Models and back

1 code implementation • 8 Sep 2022 • Sebastian Bordt, Ulrike Von Luxburg

We then show that $n$-Shapley Values, as well as the Shapley-Taylor and Faith-Shap interaction indices, recover GAMs with interaction terms up to order $n$.

Additive models
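
To make the order-1 case of this correspondence concrete, here is a brute-force sketch (not the paper's $n$-Shapley-Value algorithm) that computes interventional Shapley values of a purely additive toy model and checks that they match the centered component functions $f_i(x_i) - \mathbb{E}[f_i(X_i)]$.

```python
import itertools
import math
import numpy as np

# A purely additive toy model (a GAM without interaction terms).
f1 = lambda x: np.sin(x)
f2 = lambda x: x ** 2
f3 = lambda x: 0.5 * x
model = lambda X: f1(X[:, 0]) + f2(X[:, 1]) + f3(X[:, 2])

rng = np.random.default_rng(0)
background = rng.normal(size=(5000, 3))   # reference data used to "remove" features

def coalition_value(S, x):
    """Interventional value: features in S are fixed to x, the rest averaged over the background."""
    Z = background.copy()
    idx = list(S)
    Z[:, idx] = x[idx]
    return model(Z).mean()

def shapley_values(x, d=3):
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in itertools.combinations(others, size):
                weight = math.factorial(size) * math.factorial(d - size - 1) / math.factorial(d)
                phi[i] += weight * (coalition_value(set(S) | {i}, x) - coalition_value(set(S), x))
    return phi

x = np.array([1.0, -0.5, 2.0])
phi = shapley_values(x)
# For an additive model, phi_i should equal f_i(x_i) - E[f_i(X_i)] (the order-1 Shapley/GAM link).
centered = np.array([f1(x[0]) - f1(background[:, 0]).mean(),
                     f2(x[1]) - f2(background[:, 1]).mean(),
                     f3(x[2]) - f3(background[:, 2]).mean()])
print(np.round(phi, 3))
print(np.round(centered, 3))
```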

The Manifold Hypothesis for Gradient-Based Explanations

no code implementations • 15 Jun 2022 • Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike Von Luxburg

We propose a necessary criterion: their feature attributions need to be aligned with the tangent space of the data manifold.

Diabetic Retinopathy Detection
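
As a rough illustration of the proposed criterion (a simplified sketch, not the paper's evaluation pipeline), one can measure how much of an attribution vector lies in an estimated tangent space; here the tangent space is estimated by local PCA of neighboring points.

```python
import numpy as np

def tangent_alignment(attribution, tangent_basis):
    """Fraction of the attribution's squared norm that lies in the span of the tangent basis.

    attribution   : (d,) gradient/saliency vector at a data point
    tangent_basis : (d, k) matrix whose columns span (an estimate of) the tangent space
    Returns a value in [0, 1]; 1 means the attribution lies entirely in the tangent space.
    """
    Q, _ = np.linalg.qr(tangent_basis)          # orthonormalize the basis
    projected = Q @ (Q.T @ attribution)         # orthogonal projection onto the tangent space
    return float(projected @ projected / (attribution @ attribution))

# Toy example: a 2-D manifold embedded in 10-D, tangent space estimated by local PCA.
rng = np.random.default_rng(0)
neighbors = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10))   # points in a 2-D subspace
centered = neighbors - neighbors.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
tangent_basis = Vt[:2].T                        # top-2 principal directions as tangent estimate

gradient = rng.normal(size=10)                  # stand-in for a model gradient at the point
print(f"alignment: {tangent_alignment(gradient, tangent_basis):.2f}")
```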

Interpolation and Regularization for Causal Learning

no code implementations • 18 Feb 2022 • Leena Chennuru Vankadara, Luca Rendsburg, Ulrike Von Luxburg, Debarghya Ghoshdastidar

If the confounding strength is negative, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and the optimal regularization can even be negative.

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

1 code implementation • 25 Jan 2022 • Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike Von Luxburg

In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

Clustering

Specialists Outperform Generalists in Ensemble Classification

no code implementations • 9 Jul 2021 • Sascha Meyen, Frieder Göppert, Helen Alber, Ulrike Von Luxburg, Volker H. Franz

We explicitly construct the individual classifiers that attain the upper and lower bounds: specialists and generalists.

Classification

Looking Deeper into Tabular LIME

1 code implementation • 25 Aug 2020 • Damien Garreau, Ulrike Von Luxburg

As an example, for linear functions we show that LIME has the desirable property of providing explanations that are proportional to the coefficients of the function to explain, and of ignoring coordinates that the function does not use.

BIG-bench Machine Learning
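
A minimal re-implementation of the tabular-LIME idea (Gaussian perturbations plus a proximity-weighted linear surrogate) illustrates this behavior on a linear function. It is a hedged sketch, not the lime package and not the paper's exact setup: it omits LIME's discretization step, which is where the proportionality constants analyzed in the paper come from.

```python
import numpy as np

# Black-box function to explain: linear, and it ignores the last coordinate.
coef = np.array([2.0, -1.0, 0.5, 0.0])
f = lambda X: X @ coef

def lime_like_explanation(x, f, n_samples=20_000, kernel_width=1.0, seed=0):
    """Minimal tabular-LIME-style surrogate: perturb x with Gaussian noise,
    weight samples by proximity, and fit a weighted linear model."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(size=(n_samples, x.size))          # perturbed samples around x
    dist2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-dist2 / (2 * kernel_width ** 2))          # exponential proximity kernel
    A = np.column_stack([np.ones(n_samples), Z])          # intercept + features
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f(Z), rcond=None)
    return beta[1:]                                       # surrogate feature weights

x = np.array([0.3, -1.2, 0.7, 2.0])
print(np.round(lime_like_explanation(x, f), 2))   # tracks coef; near 0 for the unused coordinate
```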

A Bandit Model for Human-Machine Decision Making with Private Information and Opacity

no code implementations • 9 Jul 2020 • Sebastian Bordt, Ulrike Von Luxburg

A lower bound quantifies the worst-case hardness of optimally advising a decision maker who is opaque or has access to private information.

BIG-bench Machine Learning • Decision Making

Explaining the Explainer: A First Theoretical Analysis of LIME

no code implementations • 10 Jan 2020 • Damien Garreau, Ulrike Von Luxburg

We derive closed-form expressions for the coefficients of the interpretable model when the function to explain is linear.

Decision Making

Insights into Ordinal Embedding Algorithms: A Systematic Evaluation

no code implementations • 3 Dec 2019 • Leena Chennuru Vankadara, Siavash Haghiri, Michael Lohaus, Faiz Ul Wahab, Ulrike Von Luxburg

However, there does not exist a fair and thorough assessment of these embedding methods, and therefore several key questions remain unanswered: Which algorithms perform better when the embedding dimension is constrained or when few triplet comparisons are available?

Representation Learning

Large Scale Representation Learning from Triplet Comparisons

no code implementations • 25 Sep 2019 • Siavash Haghiri, Leena Chennuru Vankadara, Ulrike Von Luxburg

This problem has been studied in a sub-community of machine learning under the name "Ordinal Embedding".

Representation Learning

Estimation of perceptual scales using ordinal embedding

no code implementations • 21 Aug 2019 • Siavash Haghiri, Felix Wichmann, Ulrike Von Luxburg

We propose to use ordinal embedding methods from machine learning to estimate the scaling function from the relative judgments.

Uncertainty Estimates for Ordinal Embeddings

no code implementations • 27 Jun 2019 • Michael Lohaus, Philipp Hennig, Ulrike Von Luxburg

To investigate objects without a describable notion of distance, one can gather ordinal information by asking triplet comparisons of the form "Is object $x$ closer to $y$ or is $x$ closer to $z$?"
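
Such triplet answers are typically simulated from latent distances when benchmarking ordinal-embedding methods. A minimal sketch of such a simulation (illustrative only, not the paper's uncertainty machinery), with an optional flip probability to mimic noisy human responses, could look like this:

```python
import numpy as np

def triplet_oracle(X, i, j, k, noise=0.0, rng=None):
    """Answer the query "is object i closer to j than to k?" from latent points X,
    optionally flipping the answer with probability `noise` to model response errors."""
    rng = rng or np.random.default_rng()
    answer = np.linalg.norm(X[i] - X[j]) < np.linalg.norm(X[i] - X[k])
    if noise and rng.random() < noise:
        answer = not answer
    return answer

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                       # latent 2-D positions of 20 objects
triplets = [(i, j, k, triplet_oracle(X, i, j, k, noise=0.1, rng=rng))
            for i, j, k in rng.integers(0, 20, size=(1000, 3))
            if len({i, j, k}) == 3]                # keep only triplets of distinct objects
print(triplets[:3])
```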

Comparison-Based Framework for Psychophysics: Lab versus Crowdsourcing

no code implementations • 17 May 2019 • Siavash Haghiri, Patricia Rubisch, Robert Geirhos, Felix Wichmann, Ulrike Von Luxburg

In this paper we study whether the use of comparison-based (ordinal) data, combined with machine learning algorithms, can boost the reliability of crowdsourcing studies for psychophysics, such that they can achieve performance close to a lab experiment.

BIG-bench Machine Learning

When do random forests fail?

no code implementations • NeurIPS 2018 • Cheng Tang, Damien Garreau, Ulrike Von Luxburg

As a consequence, even highly randomized trees can lead to inconsistent forests if no subsampling is used, which implies that some of the commonly used setups for random forests can be inconsistent.

Measures of distortion for machine learning

no code implementations • NeurIPS 2018 • Leena Chennuru Vankadara, Ulrike Von Luxburg

We can show both in theory and in experiments that it satisfies all desirable properties and is a better candidate to evaluate distortion in the context of machine learning.

BIG-bench Machine Learning

Practical methods for graph two-sample testing

1 code implementation • NeurIPS 2018 • Debarghya Ghoshdastidar, Ulrike Von Luxburg

Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs.

Learning Theory • Open-Ended Question Answering • +2

Foundations of Comparison-Based Hierarchical Clustering

1 code implementation • NeurIPS 2019 • Debarghya Ghoshdastidar, Michaël Perrot, Ulrike Von Luxburg

We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities.

Clustering

Boosting for Comparison-Based Learning

no code implementations • 31 Oct 2018 • Michaël Perrot, Ulrike Von Luxburg

We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object $x_i$ is closer to object $x_j$ than to object $x_k$."

Object

Comparison-Based Random Forests

no code implementations • ICML 2018 • Siavash Haghiri, Damien Garreau, Ulrike Von Luxburg

Assume we are given a set of items from a general metric space, but we neither have access to the representation of the data nor to the distances between data points.

General Classification • regression

Design and Analysis of the NIPS 2016 Review Process

1 code implementation • 31 Aug 2017 • Nihar B. Shah, Behzad Tabibian, Krikamol Muandet, Isabelle Guyon, Ulrike Von Luxburg

Neural Information Processing Systems (NIPS) is a top-tier annual conference in machine learning.

Two-sample Hypothesis Testing for Inhomogeneous Random Graphs

no code implementations • 4 Jul 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

Given a population of $m$ graphs from each model, we derive minimax separation rates for the problem of testing $P=Q$ against $d(P, Q)>\rho$.

Two-sample testing • Vocal Bursts Valence Prediction

Two-Sample Tests for Large Random Graphs Using Network Statistics

no code implementations • 17 May 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

We consider a two-sample hypothesis testing problem, where the distributions are defined on the space of undirected graphs, and one has access to only one observation from each model.

Two-sample testing • Vocal Bursts Valence Prediction

Comparison Based Nearest Neighbor Search

no code implementations • 5 Apr 2017 • Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike Von Luxburg

We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points.
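
To make this access model concrete, here is a naive linear-scan baseline (not the paper's algorithm) that finds a nearest neighbor using only answers to triplet queries of the form "is the query closer to a or to b?".

```python
import numpy as np

def closer(X, q, a, b):
    """Oracle answering "is q closer to a than to b?" using the hidden metric."""
    return np.linalg.norm(X[q] - X[a]) < np.linalg.norm(X[q] - X[b])

def nearest_neighbor(X, q, candidates):
    """Naive comparison-based search: keep the current best and challenge it
    with each candidate, using only triplet answers (n - 1 comparisons in total)."""
    best = candidates[0]
    for c in candidates[1:]:
        if closer(X, q, c, best):
            best = c
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # hidden representation, seen only by the oracle
query = 0
others = list(range(1, 100))
print("nearest neighbor of object 0:", nearest_neighbor(X, query, others))
```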

Kernel functions based on triplet comparisons

no code implementations • NeurIPS 2017 • Matthäus Kleindessner, Ulrike Von Luxburg

Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set.

Object
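
One natural construction in this spirit (a simplified sketch, not necessarily either of the paper's two kernels) represents each object by its vector of answers to a fixed set of triplet questions and takes inner products of these vectors, which yields a positive semi-definite Gram matrix by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 2))            # latent positions; only the triplet answers are "observed"

# Fixed set of "landmark" pairs (j, k); each object a is described by its answers to
# "is a more similar to j than to k?" over all landmark pairs.
# (Degenerate queries where a coincides with j or k are kept for simplicity.)
pairs = [(j, k) for j in range(n) for k in range(j + 1, n)]
features = np.array([[1.0 if np.linalg.norm(X[a] - X[j]) < np.linalg.norm(X[a] - X[k]) else -1.0
                      for (j, k) in pairs] for a in range(n)])

# Kernel: normalized inner product of the triplet-answer feature vectors.
K = features @ features.T / len(pairs)
print(K.shape, K[0, :4].round(2))
```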

Lens depth function and k-relative neighborhood graph: versatile tools for ordinal data analysis

no code implementations • 23 Feb 2016 • Matthäus Kleindessner, Ulrike Von Luxburg

In recent years it has become popular to study machine learning problems in a setting of ordinal distance information rather than numerical distance measurements.

BIG-bench Machine Learning • Clustering

Peer Grading in a Course on Algorithms and Data Structures: Machine Learning Algorithms do not Improve over Simple Baselines

no code implementations • 2 Jun 2015 • Mehdi S. M. Sajjadi, Morteza Alamgir, Ulrike Von Luxburg

Peer grading is the process of students reviewing each other's work, such as homework submissions, and has lately become a popular mechanism used in massive open online courses (MOOCs).

Consistent procedures for cluster tree estimation and pruning

no code implementations • 5 Jun 2014 • Kamalika Chaudhuri, Sanjoy Dasgupta, Samory Kpotufe, Ulrike Von Luxburg

For a density $f$ on $\mathbb{R}^d$, a high-density cluster is any connected component of $\{x: f(x) \geq \lambda\}$, for some $\lambda > 0$.

Clustering
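
An empirical analogue of this definition (a rough sketch, not the consistent estimation and pruning procedures analyzed in the paper): estimate the density with a KDE, threshold at $\lambda$, and take connected components of a neighborhood graph on the surviving sample points.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs in R^2.
X = np.vstack([rng.normal(loc=-3, size=(200, 2)), rng.normal(loc=+3, size=(200, 2))])

density = gaussian_kde(X.T)(X.T)          # KDE evaluated at the sample points
lam = np.quantile(density, 0.3)           # level lambda: drop the lowest-density 30% of points
high = X[density >= lam]                  # empirical version of {x : f(x) >= lambda}

# Connected components of an epsilon-neighborhood graph on the high-density points.
eps = 1.0
adjacency = csr_matrix(cdist(high, high) <= eps)
n_clusters, labels = connected_components(adjacency, directed=False)
print("high-density clusters at this level:", n_clusters)
```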

A Tutorial on Spectral Clustering

4 code implementations • 1 Nov 2007 • Ulrike von Luxburg

In recent years, spectral clustering has become one of the most popular modern clustering algorithms.

Clustering
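
For reference, here is a minimal version of unnormalized spectral clustering in the spirit of the tutorial (Gaussian similarity graph, graph Laplacian, $k$ smallest eigenvectors, then k-means on the spectral embedding). It is a sketch using scikit-learn's KMeans, not the tutorial's code; the two-rings example is where k-means on raw coordinates fails but spectral clustering succeeds.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Unnormalized spectral clustering: Gaussian similarity graph, Laplacian L = D - W,
    k smallest eigenvectors, then k-means on the spectral embedding."""
    W = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma ** 2))   # fully connected similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W                              # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                                 # eigenvectors, ascending eigenvalues
    U = vecs[:, :k]                                             # embedding: k smallest eigenvectors
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Two concentric rings: not linearly separable, but connected only within each ring.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.repeat([1.0, 4.0], 100) + 0.1 * rng.normal(size=200)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
labels = spectral_clustering(X, k=2, sigma=0.5)
print(np.bincount(labels))    # roughly 100 points per ring
```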
