no code implementations • 9 Jul 2020 • Sebastian Bordt, Ulrike von Luxburg
A lower bound quantifies the worst-case hardness of optimally advising a decision maker who is opaque or has access to private information.
no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar
Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.
no code implementations • 15 Jun 2022 • Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike von Luxburg
We propose a necessary criterion: their feature attributions need to be aligned with the tangent space of the data manifold.
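The criterion can be probed numerically: estimate the tangent space at a data point via local PCA over its neighbors and measure how much of an attribution vector lies within that subspace. The sketch below is illustrative only — the local-PCA estimator and the toy planar data are assumptions, not the authors' setup:

```python
import numpy as np

def tangent_space(points, x, k=20, dim=2):
    """Estimate an orthonormal tangent-space basis at x via PCA over its
    k nearest neighbors (a simple, assumed estimator)."""
    dists = np.linalg.norm(points - x, axis=1)
    nbrs = points[np.argsort(dists)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Top principal directions of the local neighborhood span the tangent space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:dim]  # shape (dim, ambient_dim)

def alignment(attribution, basis):
    """Fraction of the attribution's norm lying in the tangent space."""
    proj = basis.T @ (basis @ attribution)
    return np.linalg.norm(proj) / np.linalg.norm(attribution)

# Toy example: data on a 2-D plane embedded in 5-D ambient space.
rng = np.random.default_rng(0)
coords = rng.normal(size=(200, 2))
data = np.hstack([coords, np.zeros((200, 3))])
basis = tangent_space(data, data[0])

attr_on = np.array([1.0, -0.5, 0.0, 0.0, 0.0])   # lies in the manifold plane
attr_off = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # orthogonal to it
print(alignment(attr_on, basis), alignment(attr_off, basis))
```

On this toy data the on-manifold attribution scores near 1 and the off-manifold one near 0, which is the kind of alignment the criterion asks for.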
no code implementations • 5 Feb 2024 • Sebastian Bordt, Ulrike von Luxburg
In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used.
1 code implementation • NeurIPS 2023 • Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju
Quantifying the perceptual alignment of model gradients via their similarity with the gradients of generative models, we show that off-manifold robustness correlates well with perceptual alignment.
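The similarity measure can be illustrated as a plain cosine similarity between flattened gradients; the random arrays below are hypothetical stand-ins for a classifier's input gradient and a generative model's score, not outputs of real models:

```python
import numpy as np

def cosine_similarity(g_model, g_reference):
    """Cosine similarity between a classifier's input gradient and a
    reference gradient (in the paper, from a generative model)."""
    g1 = g_model.ravel()
    g2 = g_reference.ravel()
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

# Toy check with hypothetical image-shaped gradients.
rng = np.random.default_rng(1)
g = rng.normal(size=(3, 8, 8))
print(cosine_similarity(g, g))    # identical directions: close to 1
print(cosine_similarity(g, -g))   # opposite directions: close to -1
```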
1 code implementation • 25 Jan 2022 • Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg
In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.
1 code implementation • 8 Sep 2022 • Sebastian Bordt, Ulrike von Luxburg
We then show that $n$-Shapley Values, as well as the Shapley Taylor and Faith-Shap interaction indices, recover GAMs with interaction terms up to order $n$.
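In the special case without interactions ($n=1$), plain Shapley values already recover the GAM components: for an additive model, the Shapley value of feature $i$ is $f_i(x_i)$ minus its value at the reference. A minimal sketch with a hypothetical three-feature additive model, replacing the expectation over absent features with a fixed background point:

```python
import itertools
from math import factorial

import numpy as np

# A tiny GAM without interaction terms: f(x) = f1(x1) + f2(x2) + f3(x3).
components = [lambda v: 2 * v, lambda v: v ** 2, lambda v: -v]

def gam(x):
    return sum(f(xi) for f, xi in zip(components, x))

def shapley_values(f, x, background):
    """Exact Shapley values via enumeration of all feature coalitions.
    Absent features are set to a fixed background point, a common
    simplification of the expectation over the data distribution."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                z_without = background.copy()
                for j in S:
                    z_without[j] = x[j]
                z_with = z_without.copy()
                z_with[i] = x[i]
                phi[i] += weight * (f(z_with) - f(z_without))
    return phi

x = np.array([1.0, 2.0, 3.0])
background = np.zeros(3)
phi = shapley_values(gam, x, background)
# For an additive model, phi_i = f_i(x_i) - f_i(background_i): here [2, 4, -3].
print(phi)
```

Because the model is purely additive, every marginal contribution of feature $i$ equals the same component difference, so the exact enumeration returns the GAM's own shape functions evaluated at $x$.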
1 code implementation • 11 Mar 2024 • Sebastian Bordt, Harsha Nori, Rich Caruana
While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over.
1 code implementation • 8 Mar 2023 • Sebastian Bordt, Ulrike von Luxburg
We asked ChatGPT to participate in an undergraduate computer science exam on "Algorithms and Data Structures".
1 code implementation • 2 Aug 2023 • Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana
We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components.
1 code implementation • 22 Feb 2024 • Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana
Recent years have seen important advances in the building of interpretable models: machine learning models designed to be easily understood by humans.
2 code implementations • 9 Apr 2024 • Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana
We then compare the few-shot learning performance of LLMs on datasets that were seen during training to the performance on datasets released after training.
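The comparison amounts to grouping datasets by whether they were released before the model's training cutoff and contrasting mean few-shot accuracy across the two groups. The dataset names and accuracy numbers below are invented for illustration, not results from the paper:

```python
from statistics import mean

# Hypothetical few-shot accuracies per dataset, split by whether the
# dataset was released before the model's training cutoff.
results = [
    {"dataset": "tabular_old_1", "released_before_cutoff": True,  "accuracy": 0.85},
    {"dataset": "tabular_old_2", "released_before_cutoff": True,  "accuracy": 0.82},
    {"dataset": "tabular_new_1", "released_before_cutoff": False, "accuracy": 0.74},
    {"dataset": "tabular_new_2", "released_before_cutoff": False, "accuracy": 0.71},
]

seen = mean(r["accuracy"] for r in results if r["released_before_cutoff"])
unseen = mean(r["accuracy"] for r in results if not r["released_before_cutoff"])
gap = seen - unseen  # a large gap is evidence of contamination/memorization
print(f"seen: {seen:.3f}  unseen: {unseen:.3f}  gap: {gap:.3f}")
```

A gap near zero would suggest genuine few-shot generalization; a large positive gap suggests the model has seen (and partly memorized) the older datasets during training.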