Search Results for author: Sebastian Bordt

Found 12 papers, 8 papers with code

Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models

2 code implementations · 9 Apr 2024 · Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana

We then compare the few-shot learning performance of LLMs on datasets that were seen during training to the performance on datasets released after training.

Few-Shot Learning · Language Modelling · +2
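The comparison in the snippet above lends itself to a small illustration. Below is a minimal sketch, not the paper's actual pipeline: `query_llm` is a hypothetical chat-completion helper, and the split of datasets into those seen before and released after the training cutoff is assumed to be given.

```python
# Minimal sketch of the pre-/post-cutoff comparison. `query_llm` is a
# hypothetical helper standing in for any chat-completion API.
import random

def few_shot_accuracy(dataset, query_llm, k=20, n_eval=100):
    """Few-shot tabular classification: serialize k labeled rows as examples,
    then ask the model to predict the label of a held-out row."""
    rows = dataset["rows"]
    correct = 0
    for _ in range(n_eval):
        examples = random.sample(rows, k)
        target = random.choice([r for r in rows if r not in examples])
        prompt = "\n".join(
            f"{r['features']} -> {r['label']}" for r in examples
        ) + f"\n{target['features']} ->"
        prediction = query_llm(prompt).strip()
        correct += prediction == str(target["label"])
    return correct / n_eval

# A large accuracy gap between datasets seen during training and datasets
# released afterwards is consistent with memorization rather than learning:
# for ds in seen_datasets + novel_datasets:
#     print(ds["name"], few_shot_accuracy(ds, query_llm))
```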

Elephants Never Forget: Testing Language Models for Memorization of Tabular Data

1 code implementation · 11 Mar 2024 · Sebastian Bordt, Harsha Nori, Rich Caruana

While many have shown how Large Language Models (LLMs) can be applied to a diverse set of tasks, the critical issues of data contamination and memorization are often glossed over.

Language Modelling · Memorization
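One concrete probe in this spirit is a row-completion test: if a model can reproduce verbatim rows of a canonical CSV file given only the preceding rows, the file was plausibly part of its training data. A minimal sketch, again with a hypothetical `query_llm` helper:

```python
# Row-completion memorization test. `query_llm` is a hypothetical
# chat-completion helper; everything else is plain Python.
import random

def row_completion_rate(csv_lines, query_llm, n_context=15, n_trials=25):
    """Fraction of trials in which the model reproduces the next CSV row verbatim."""
    hits = 0
    for _ in range(n_trials):
        start = random.randrange(len(csv_lines) - n_context - 1)
        context = "\n".join(csv_lines[start : start + n_context])
        expected = csv_lines[start + n_context]
        completion = query_llm(
            "Complete the next row of this dataset verbatim:\n" + context
        )
        hits += completion.strip().splitlines()[0] == expected
    return hits / n_trials

# High completion rates on a widely circulated dataset indicate memorization;
# near-zero rates on novel data provide the baseline for comparison.
```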

Data Science with LLMs and Interpretable Models

1 code implementation · 22 Feb 2024 · Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana

Recent years have seen important advances in building interpretable models: machine learning models that are designed to be easily understood by humans.

Additive models · Question Answering

Statistics without Interpretation: A Sober Look at Explainable Machine Learning

no code implementations · 5 Feb 2024 · Sebastian Bordt, Ulrike von Luxburg

In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used.

LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

1 code implementation · 2 Aug 2023 · Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana

We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components.

Additive models
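A rough sketch of the interface idea, under assumptions: the fitted glass-box model is held as per-feature graphs (bin edges plus additive contributions), and each univariate graph is serialized to text so an LLM can be asked to describe or critique it. The helper names and bin values below are illustrative, not the paper's code.

```python
# Serialize one univariate GAM component to text for an LLM prompt.
# All names and numbers here are illustrative placeholders.
import numpy as np

def serialize_component(feature_name, bin_edges, contributions):
    """Render one shape function as human- and LLM-readable text."""
    pairs = ", ".join(
        f"[{lo:.3g}, {hi:.3g}) -> {c:+.3f}"
        for lo, hi, c in zip(bin_edges[:-1], bin_edges[1:], contributions)
    )
    return f"Feature '{feature_name}' (additive contribution to risk): {pairs}"

edges = np.array([0, 50, 65, 80, 100])    # age bins (hypothetical)
scores = np.array([-0.4, 0.1, 0.6, 0.3])  # additive log-odds per bin (hypothetical)
prompt = (serialize_component("age", edges, scores)
          + "\nDoes this graph contain any surprising patterns?")
# answer = query_llm(prompt)  # query_llm: assumed chat-completion helper
```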

Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness

1 code implementation · NeurIPS 2023 · Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju

Quantifying the perceptual alignment of model gradients via their similarity with the gradients of generative models, we show that off-manifold robustness correlates well with perceptual alignment.

Denoising · Image Generation
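The measurement can be sketched as follows, with untrained placeholder networks standing in for a real classifier and a real denoising model: the classifier's input gradient is compared, via cosine similarity, to the denoiser residual `denoised(x) - x`, which points toward the data manifold. This is an illustration of the idea, not the paper's implementation.

```python
# Proxy for perceptual alignment of gradients: cosine similarity between the
# classifier's input gradient and a score direction from a denoiser. Both
# networks are untrained stand-ins for illustration only.
import torch
import torch.nn.functional as F

classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
denoiser = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 3 * 32 * 32))

x = torch.randn(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([3])

# Input gradient of the classification loss.
loss = F.cross_entropy(classifier(x), label)
grad, = torch.autograd.grad(loss, x)

# Score direction from the denoiser: points from x toward the data manifold.
with torch.no_grad():
    score = denoiser(x).view_as(x) - x

alignment = F.cosine_similarity(grad.flatten(), score.flatten(), dim=0)
print(f"perceptual-alignment proxy (cosine similarity): {alignment.item():+.3f}")
```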

ChatGPT Participates in a Computer Science Exam

1 code implementation · 8 Mar 2023 · Sebastian Bordt, Ulrike von Luxburg

We asked ChatGPT to participate in an undergraduate computer science exam on "Algorithms and Data Structures".

From Shapley Values to Generalized Additive Models and back

1 code implementation · 8 Sep 2022 · Sebastian Bordt, Ulrike von Luxburg

We then show that $n$-Shapley Values, as well as the Shapley-Taylor and Faith-Shap interaction indices, recover GAMs with interaction terms up to order $n$.

Additive models
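In the simplest (order-1) case, the correspondence can be written out directly. The following is a sketch assuming a purely additive model and the marginal value function; the paper's general statement covers interaction terms up to order $n$.

```latex
% Order-1 case: for a purely additive model f(x) = c + \sum_i f_i(x_i) and
% the marginal value function v(S) = \mathbb{E}[f(x_S, X_{\bar S})], the
% marginal contribution v(S \cup \{i\}) - v(S) does not depend on S, so each
% feature's Shapley value is its own centered shape function, and the
% attributions recover the GAM.
\[
  \phi_i(x)
  = \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,
    \bigl( v(S \cup \{i\}) - v(S) \bigr)
  = f_i(x_i) - \mathbb{E}\bigl[f_i(X_i)\bigr].
\]
```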

The Manifold Hypothesis for Gradient-Based Explanations

no code implementations · 15 Jun 2022 · Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike von Luxburg

We propose a necessary criterion: their feature attributions need to be aligned with the tangent space of the data manifold.

Diabetic Retinopathy Detection
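A minimal sketch of checking this criterion, under simple assumptions: the tangent space at a point is estimated by PCA over its nearest neighbors in the data, and alignment is measured as the fraction of the attribution's squared norm lying in that subspace. All names and the toy data below are illustrative.

```python
# Tangent-space alignment of a feature attribution, estimated via local PCA.
import numpy as np

def tangent_alignment(x, data, attribution, intrinsic_dim=2, k=50):
    """Fraction of ||attribution||^2 inside the estimated tangent space at x."""
    # k nearest neighbors of x (Euclidean distance).
    neighbors = data[np.argsort(np.linalg.norm(data - x, axis=1))[:k]]
    centered = neighbors - neighbors.mean(axis=0)
    # Top principal directions span the estimated tangent space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:intrinsic_dim]                    # (intrinsic_dim, ambient_dim)
    projected = basis.T @ (basis @ attribution)   # projection onto tangent space
    return np.dot(projected, projected) / np.dot(attribution, attribution)

# Toy check: data on a 2-D plane embedded in 10-D ambient space; a random
# attribution vector should be only weakly aligned with the tangent space.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
x, attribution = data[0], rng.normal(size=10)
print(f"tangent alignment: {tangent_alignment(x, data, attribution):.2f}")
```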

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

1 code implementation · 25 Jan 2022 · Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg

In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.

Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models

no code implementations · 18 Oct 2021 · Leena Chennuru Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

Clustering

A Bandit Model for Human-Machine Decision Making with Private Information and Opacity

no code implementations · 9 Jul 2020 · Sebastian Bordt, Ulrike von Luxburg

A lower bound quantifies the worst-case hardness of optimally advising a decision maker who is opaque or has access to private information.

BIG-bench Machine Learning · Decision Making
