Counterfactual Explanation
73 papers with code • 0 benchmarks • 1 dataset
Counterfactual explanation methods return a contrastive argument showing how to reach the desired class, e.g., “to obtain this loan, you need XXX of annual revenue instead of the current YYY”.
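The loan example above can be sketched as a minimal counterfactual search against a black-box classifier. The model, step size, and search range below are illustrative assumptions, not any specific method from the papers listed:

```python
# Hypothetical loan-approval model: approves when annual revenue >= 50_000.
# It stands in for any black-box classifier f(x) -> {0, 1}.
def model(x):
    return int(x["annual_revenue"] >= 50_000)

def counterfactual_revenue(x, step=1_000, max_revenue=1_000_000):
    """Search for the smallest revenue increase that flips the decision."""
    if model(x) == 1:
        return x  # already approved, no change needed
    cf = dict(x)
    while cf["annual_revenue"] < max_revenue:
        cf["annual_revenue"] += step
        if model(cf) == 1:
            return cf  # smallest change (up to step size) that flips the class
    return None  # no counterfactual found within the search range

applicant = {"annual_revenue": 30_000}
cf = counterfactual_revenue(applicant)
# cf reads as: "to obtain this loan, you need 50,000 of annual revenue
# instead of the current 30,000"
```

Real methods replace this one-feature line search with an optimization over all features, typically penalizing the distance between the factual input and the counterfactual.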
Most implemented papers
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual explanation methods, and (iii) a standardized set of integrated evaluation measures and data sets for transparent and extensive comparisons of these methods.
Counterfactual Explanation Algorithms for Behavioral and Textual Data
This study aligns the recently proposed Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.
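The SEDC-style idea of a counterfactual for text (remove the evidence supporting the predicted class until the prediction flips) can be sketched with a toy bag-of-words sentiment model. The scoring function and word lists below are assumptions for illustration, not the paper's actual models:

```python
# Toy positive-evidence vocabulary standing in for a learned linear model.
POSITIVE = {"great", "excellent", "love"}

def score(words):
    # Predicts "positive" when the score is above 0.
    return sum(1 for w in words if w in POSITIVE) - 0.5

def sedc_counterfactual(words):
    """Greedily remove the word that most supports the predicted class,
    until the prediction flips; the removed set is the explanation."""
    words = list(words)
    removed = []
    while score(words) > 0:
        best = max(
            set(words),
            key=lambda w: score(words) - score([v for v in words if v != w]),
        )
        words = [v for v in words if v != best]
        removed.append(best)
    return removed

doc = ["great", "service", "and", "excellent", "food"]
# sedc_counterfactual(doc) removes the positive evidence ("great",
# "excellent"), flipping the toy prediction to negative.
```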
Explaining Groups of Points in Low-Dimensional Representations
A common workflow in data exploration is to learn a low-dimensional representation of the data, identify groups of points in that representation, and examine the differences between the groups to determine what they represent.
Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End
In addition, by restricting the features that can be modified for generating counterfactual examples, we find that the top-k features from LIME or SHAP are often neither necessary nor sufficient explanations of a model's prediction.
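The necessity/sufficiency check mentioned above can be sketched under one common pair of definitions (assumed here, not quoted from the paper): a feature set S is "sufficient" if keeping only S and setting the rest to a baseline preserves the prediction, and "necessary" if replacing S with the baseline changes it. The model and baseline are illustrative stand-ins:

```python
import numpy as np

def model(x):
    return int(x[0] + x[1] > 1.0)  # toy classifier on 3 features

baseline = np.zeros(3)  # assumed reference values for "absent" features

def is_sufficient(x, S):
    # Keep only the features in S; does the prediction survive?
    masked = baseline.copy()
    masked[S] = x[S]
    return model(masked) == model(x)

def is_necessary(x, S):
    # Ablate the features in S; does the prediction change?
    ablated = x.copy()
    ablated[S] = baseline[S]
    return model(ablated) != model(x)

x = np.array([0.8, 0.8, 0.8])
top1 = [0]  # e.g. the single top feature from LIME or SHAP
# Here feature 0 alone is not sufficient (the class needs feature 1 too),
# yet it is necessary (ablating it flips the prediction) -- illustrating
# how a top-k attribution can fail either test.
```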
Counterfactual Explainable Recommendation
Technically, for each item recommended to each user, CountER formulates a joint optimization problem that generates minimal changes to the item's aspects, creating a counterfactual item for which the recommendation decision is reversed.
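The optimization can be sketched for the special case of a linear recommender, where the minimal L2 aspect change that reverses the decision has a closed form. The preference vector, threshold, and closed-form step are illustrative assumptions, not CountER's actual formulation:

```python
import numpy as np

user_pref = np.array([0.9, 0.3, 0.1])  # user's weights over item aspects
item = np.array([0.8, 0.5, 0.4])       # item's aspect quality scores
threshold = 0.8                        # recommend when score >= threshold

def score(aspects):
    return float(user_pref @ aspects)

# For a linear score, the smallest L2 change reaching the threshold is a
# step along the negative preference direction.
gap = score(item) - threshold
delta = -gap * user_pref / (user_pref @ user_pref)
counterfactual_item = item + delta

# score(counterfactual_item) lands exactly on the threshold: the minimal
# aspect change at which the recommendation decision is reversed.
```

CountER itself handles non-linear recommendation models, so the closed form above is replaced by an iterative joint optimization in the actual method.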
Counterfactual Shapley Additive Explanations
Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score for each input feature to a model.
On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations
Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not.
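Separating robustness for changed versus unchanged features can be sketched as a Monte Carlo validity check. The linear model, factual/counterfactual pair, and noise scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    return int(x.sum() > 1.0)  # toy classifier; class 1 is the CE target

x = np.array([0.2, 0.2, 0.2])    # factual input, class 0
cf = np.array([0.7, 0.2, 0.2])   # sparse counterfactual, class 1
changed = np.flatnonzero(x != cf)    # only feature 0 was changed
unchanged = np.flatnonzero(x == cf)  # features the CE leaves alone

def validity_under_noise(features, scale=0.1, trials=1000):
    """Fraction of perturbations (applied only to the given features)
    under which the counterfactual keeps the desired class."""
    hits = 0
    for _ in range(trials):
        z = cf.copy()
        z[features] += rng.normal(0, scale, size=len(features))
        hits += model(z) == 1
    return hits / trials

# Comparing validity_under_noise(changed) with
# validity_under_noise(unchanged) shows whether the CE's robustness differs
# for recommended-change features versus untouched ones.
```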
OmniXAI: A Library for Explainable AI
We introduce OmniXAI (short for Omni eXplainable AI), an open-source Python library for explainable AI (XAI) that offers a broad range of explanation capabilities and interpretable machine learning techniques, addressing the pain points of understanding and interpreting decisions made by machine learning (ML) models in practice.
VCNet: A self-explaining model for realistic counterfactual generation
Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class.
PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards
We propose a model criticism and explanation framework based on adversarially generated counterfactual examples for tabular data.
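The idea of generating counterfactuals for tabular data by substituting observed feature values can be sketched as follows. The toy scorecard, the value grids, and the exhaustive single-feature search are assumptions for illustration; PermuteAttack's actual algorithm uses a more sophisticated adversarial search:

```python
observed = {                     # feature values seen in the training data
    "income_band": [0, 1, 2, 3],
    "num_defaults": [0, 1, 2],
}

def scorecard(x):
    # Toy credit scorecard: approve (1) when the points exceed a cutoff.
    return int(2 * x["income_band"] - 3 * x["num_defaults"] >= 2)

def permute_counterfactual(x):
    """Try single-feature substitutions drawn from observed values until
    the scorecard decision flips; return the first counterfactual found."""
    target = 1 - scorecard(x)
    for feat, values in observed.items():
        for v in values:
            candidate = dict(x, **{feat: v})
            if scorecard(candidate) == target:
                return candidate
    return None

applicant = {"income_band": 1, "num_defaults": 1}  # declined by the toy model
cf = permute_counterfactual(applicant)
# cf is a counterfactual applicant the scorecard would approve.
```

Restricting substitutions to observed values keeps the counterfactuals on plausible feature values, which is one motivation for permutation-based generation on tabular data.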