Search Results for author: Martin Pawelczyk

Found 18 papers, 11 papers with code

Towards Non-Adversarial Algorithmic Recourse

no code implementations15 Mar 2024 Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci

We subsequently investigate how different components in the objective functions, e.g., the machine learning model or the cost function used to measure distance, determine whether the outcome can be considered an adversarial example.

Tasks: counterfactual, Counterfactual Explanation
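
As a point of reference for this abstract, a schematic comparison of the two objectives (the notation is assumed here, not taken from the paper): a counterfactual explanation seeks the cheapest input change that reaches a desired prediction, while an adversarial example seeks the smallest perturbation that flips the prediction.

```latex
% Counterfactual explanation: cheapest change that reaches the target class.
x_{\mathrm{cf}} \in \arg\min_{x'} d(x, x') \quad \text{s.t.} \quad f(x') = y_{\mathrm{target}}
% Adversarial example: smallest perturbation that flips the prediction.
x_{\mathrm{adv}} = x + \arg\min_{\delta} \lVert \delta \rVert \quad \text{s.t.} \quad f(x + \delta) \neq f(x)
```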

In-Context Unlearning: Language Models as Few Shot Unlearners

1 code implementation11 Oct 2023 Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

In this work, we propose a new class of unlearning methods for LLMs that we call "In-Context Unlearning": unlearning is achieved by providing specific inputs in context, without having to update model parameters.

Tasks: Machine Unlearning
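
A minimal sketch of how such an in-context construction could look for sentiment classification (the template and the label-flipping scheme are assumptions for illustration; see the paper and its code for the actual method):

```python
# Hypothetical in-context unlearning prompt: show the point to be forgotten
# with a flipped label, followed by correctly labelled examples, then the
# query. No model parameters are touched at any point.
def build_unlearning_prompt(forget_example, context_examples, query_text):
    flip = {"positive": "negative", "negative": "positive"}
    text, label = forget_example
    blocks = [f"Review: {text}\nSentiment: {flip[label]}"]       # flipped label
    blocks += [f"Review: {t}\nSentiment: {l}" for t, l in context_examples]
    blocks.append(f"Review: {query_text}\nSentiment:")           # query
    return "\n\n".join(blocks)

prompt = build_unlearning_prompt(
    ("The film was a delight.", "positive"),
    [("Dull and overlong.", "negative"), ("A superb cast.", "positive")],
    "I would watch it again.",
)
# `prompt` can now be sent to any completion-style LLM endpoint.
```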

Gaussian Membership Inference Privacy

1 code implementation NeurIPS 2023 Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD).

Tasks: Inference Attack, Membership Inference Attack
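
For context, the Gaussian trade-off curve from the Gaussian differential privacy literature, which (by analogy, as an assumption here) gives the shape of a $\mu$-GMIP guarantee: a membership inference attacker operating at false-positive rate $\alpha$ attains true-positive rate at most

```latex
\mathrm{TPR}(\alpha) \le \Phi\!\left(\Phi^{-1}(\alpha) + \mu\right)
% \Phi is the standard normal CDF; smaller \mu means a weaker attacker,
% i.e., stronger privacy.
```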

On the Privacy Risks of Algorithmic Recourse

1 code implementation10 Nov 2022 Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel

As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals.
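
One way to see the privacy risk, as a simplified sketch (the scoring rule and its direction are assumptions for illustration, not the paper's attack): the distance between a point and the counterfactual returned for it reflects how far the point sits from the learned decision boundary, which an attacker can threshold as a membership score.

```python
import numpy as np

def recourse_distance_score(x, x_cf):
    """Distance from a query point to the counterfactual the recourse
    system returned for it."""
    return np.linalg.norm(x - x_cf)

def membership_attack(points, counterfactuals, threshold):
    # Whether large or small distances indicate training membership depends
    # on the model and data; the threshold and sign are attack hyperparameters.
    return [recourse_distance_score(x, cf) > threshold
            for x, cf in zip(points, counterfactuals)]
```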

Decomposing Counterfactual Explanations for Consequential Decision Making

no code implementations3 Nov 2022 Martin Pawelczyk, Lea Tiyavorabun, Gjergji Kasneci

In this work, we develop DEAR (DisEntangling Algorithmic Recourse), a novel and practical recourse framework that bridges the gap between the IMF (independently manipulable features) assumption and strong causal assumptions.

Tasks: counterfactual, Decision Making

I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data

1 code implementation25 Oct 2022 Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci

In this work, we show that the decision not to share data can itself be considered information that should be protected to respect users' privacy.

Tasks: Data Augmentation, Decision Making, +1

On the Trade-Off between Actionable Explanations and the Right to be Forgotten

no code implementations30 Aug 2022 Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci

Thus, our work raises fundamental questions about the compatibility of "the right to an actionable explanation" in the context of the "right to be forgotten", while also providing constructive insights on the determining factors of recourse robustness.

OpenXAI: Towards a Transparent Evaluation of Model Explanations

2 code implementations22 Jun 2022 Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju

OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, and (ii) open-source implementations of eleven quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, in turn enabling comparisons of several explanation methods across a wide variety of metrics, models, and datasets.

Tasks: Benchmarking, Explainable Artificial Intelligence (XAI), +1
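
As a flavour of the kind of metric involved, here is an illustrative reimplementation of a top-$k$ feature-agreement style score between two attribution vectors (a sketch, not OpenXAI's actual API; the real definitions live in the library's code base):

```python
import numpy as np

def topk_feature_agreement(attr_a, attr_b, k=5):
    """Fraction of the k most important features (by absolute attribution)
    that two explanation methods agree on."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

# Example: two hypothetical attribution vectors over 8 features.
a = np.array([0.9, -0.1, 0.4, 0.0, 0.3, -0.7, 0.05, 0.2])
b = np.array([0.8, 0.2, -0.05, 0.1, 0.45, -0.6, 0.0, 0.15])
print(topk_feature_agreement(a, b, k=3))  # 0.666...
```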

Rethinking Stability for Attribution-based Explanations

no code implementations14 Mar 2022 Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju

As attribution-based explanation methods are increasingly used to establish model trustworthiness in high-stakes situations, it is critical to ensure that these explanations are stable, e.g., robust to infinitesimal perturbations of an input.
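
A back-of-the-envelope way to probe this empirically (the perturbation scheme and norm below are assumptions, not the paper's formal stability definitions):

```python
import numpy as np

def empirical_instability(explain_fn, x, eps=1e-3, trials=32, seed=0):
    """Worst-case relative change of an explanation under small random
    input perturbations; large values flag an unstable explanation."""
    rng = np.random.default_rng(seed)
    base = explain_fn(x)
    worst = 0.0
    for _ in range(trials):
        perturbed = explain_fn(x + rng.normal(scale=eps, size=x.shape))
        worst = max(worst, np.linalg.norm(perturbed - base)
                           / (np.linalg.norm(base) + 1e-12))
    return worst

# Usage: pass any attribution function, e.g. a gradient-based explainer
# wrapped as `lambda x: compute_attributions(model, x)` (hypothetical helper).
```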

Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse

1 code implementation13 Mar 2022 Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, Himabindu Lakkaraju

To this end, we propose a novel objective function which simultaneously minimizes the gap between the achieved (resulting) and desired recourse invalidation rates, minimizes recourse costs, and also ensures that the resulting recourse achieves a positive model prediction.
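
In pseudocode form, a minimal sketch of an objective of this shape (the noise model, decision threshold, and weighting are assumptions for illustration; the paper gives the actual formulation):

```python
import numpy as np

def invalidation_rate(predict_proba, x_cf, sigma=0.1, n=256, seed=0):
    """Monte Carlo estimate of how often small feature perturbations push
    the recourse back below the decision threshold."""
    rng = np.random.default_rng(seed)
    noisy = x_cf + rng.normal(scale=sigma, size=(n, x_cf.shape[0]))
    return float(np.mean(predict_proba(noisy) < 0.5))

def robust_recourse_objective(predict_proba, x, x_cf, target_ir=0.05, lam=1.0):
    gap = invalidation_rate(predict_proba, x_cf) - target_ir  # achieved vs. desired
    cost = np.linalg.norm(x_cf - x, ord=1)                    # recourse cost
    valid = predict_proba(x_cf[None])[0] >= 0.5               # positive prediction
    return gap ** 2 + lam * cost if valid else np.inf
```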

Deep Neural Networks and Tabular Data: A Survey

2 code implementations5 Oct 2021 Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci

Moreover, we discuss deep learning approaches for generating tabular data, and we provide an overview of strategies for explaining deep models on tabular data.

CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms

4 code implementations2 Aug 2021 Martin Pawelczyk, Sascha Bielawski, Johannes van den Heuvel, Tobias Richter, Gjergji Kasneci

In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods.

Tasks: Benchmarking, counterfactual, +1
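
A usage sketch of how such a benchmark run could look (the module and class names below are recalled from the library's documentation and may not match the current release, so treat them as assumptions):

```python
# Hypothetical CARLA-style benchmarking flow: load a catalogued dataset and
# model, run one recourse method, and collect counterfactuals for evaluation.
from carla.data.catalog import OnlineCatalog        # assumed import path
from carla.models.catalog import MLModelCatalog     # assumed import path
from carla.recourse_methods import GrowingSpheres   # assumed import path

dataset = OnlineCatalog("adult")                    # pre-integrated dataset
model = MLModelCatalog(dataset, model_type="ann")   # pre-trained model
method = GrowingSpheres(model)

factuals = dataset.df.sample(10)                    # negatively classified rows in practice
counterfactuals = method.get_counterfactuals(factuals)
```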

Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis

no code implementations18 Jun 2021 Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, Himabindu Lakkaraju

As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice.

Tasks: counterfactual, Counterfactual Explanation

Gaussian Experts Selection using Graphical Models

no code implementations2 Feb 2021 Hamed Jalali, Martin Pawelczyk, Gjergji Kasneci

Imposing the conditional independence (CI) assumption between the experts renders the aggregation of different expert predictions time-efficient, at the cost of poor uncertainty quantification.

Tasks: Gaussian Processes, Uncertainty Quantification
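
For reference, the standard CI-based aggregation this line of work starts from (a textbook product-of-experts formula, not the paper's contribution): with expert predictive distributions $\mathcal{N}(\mu_k(x_*), \sigma_k^2(x_*))$, the aggregated variance and mean are

```latex
\sigma_*^{-2}(x_*) = \sum_{k=1}^{M} \sigma_k^{-2}(x_*), \qquad
\mu_*(x_*) = \sigma_*^{2}(x_*) \sum_{k=1}^{M} \sigma_k^{-2}(x_*)\, \mu_k(x_*)
```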

On Counterfactual Explanations under Predictive Multiplicity

no code implementations23 Jun 2020 Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci

In this work, we derive a general upper bound for the costs of counterfactual explanations under predictive multiplicity.

Tasks: counterfactual

Learning Model-Agnostic Counterfactual Explanations for Tabular Data

3 code implementations21 Oct 2019 Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci

On the one hand, we propose complementing the catalogue of counterfactual quality measures [1] with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion.

Tasks: counterfactual
