SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using classic Shapley values from cooperative game theory and their related extensions. Because exact Shapley values are expensive to compute, they are approximated in practice with methods such as Kernel SHAP, which fits a weighted local linear regression using a specially chosen kernel, and Deep SHAP, which adapts DeepLIFT to estimate them for deep networks.
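To make the underlying idea concrete, here is a minimal sketch (not the `shap` library's actual implementation) of exact Shapley values for a tiny model: each feature's attribution is its average marginal contribution over all subsets of the other features, with "absent" features filled in from a background reference point. The function names and the toy linear model are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley values for model f at point x.

    Features absent from a coalition are replaced by the values
    from a background (reference) point. Exponential in the number
    of features, so only feasible for tiny inputs; Kernel SHAP and
    Deep SHAP exist precisely to approximate this quantity.
    """
    n = len(x)
    def value(subset):
        # Input with features in `subset` taken from x, the rest from background.
        z = [x[i] if i in subset else background[i] for i in range(n)]
        return f(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy linear model: contributions decompose exactly.
f = lambda z: 2 * z[0] + 3 * z[1]
phi = shapley_values(f, x=[1, 1], background=[0, 0])
print(phi)  # [2.0, 3.0]
```

Note the additivity ("efficiency") property: the attributions sum to `f(x) - f(background)`, which is what makes Shapley values a principled credit allocation.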
Source: A Unified Approach to Interpreting Model Predictions
| Task | Papers | Share |
|---|---|---|
| Feature Importance | 70 | 10.46% |
| Explainable Artificial Intelligence (XAI) | 62 | 9.27% |
| Explainable artificial intelligence | 59 | 8.82% |
| Decision Making | 47 | 7.03% |
| BIG-bench Machine Learning | 42 | 6.28% |
| Management | 19 | 2.84% |
| Fairness | 16 | 2.39% |
| Interpretable Machine Learning | 16 | 2.39% |
| Classification | 14 | 2.09% |