
Disentangled Attribution Curves

Introduced by Devlin et al. in Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees

Disentangled Attribution Curves (DAC) provide interpretations of tree ensemble methods in the form of (multivariate) feature importance curves. For a given feature, or group of features, DAC plots the importance of that feature (or group) as its value changes.
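As a rough illustration of the idea (not the paper's exact algorithm), the sketch below approximates a single-feature importance curve for a fitted forest by clamping the feature to each value on a grid and measuring how much the model's mean prediction shifts relative to leaving the feature at its observed distribution. The data, model, and marginalization-based proxy are all assumptions made for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical toy data: y depends nonlinearly on x0 and linearly on x1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + X[:, 1]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def importance_curve(model, X, feature, grid):
    """Shift in mean prediction when `feature` is clamped to each grid value,
    relative to leaving it at its observed (marginal) values."""
    baseline = model.predict(X).mean()
    curve = []
    for v in grid:
        X_fixed = X.copy()
        X_fixed[:, feature] = v  # clamp the feature of interest everywhere
        curve.append(model.predict(X_fixed).mean() - baseline)
    return np.array(curve)

grid = np.linspace(-1, 1, 21)
print(importance_curve(model, X, feature=0, grid=grid))
```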

The paper illustrates this with a decision tree that performs binary classification using two features representing the XOR function. In this problem, knowing the value of one feature without knowledge of the other yields no information: the classifier still has a 50% chance of predicting either class. As a result, DAC produces curves that assign zero importance to either feature on its own. Knowing both features yields perfect information about the classifier's output, so the DAC curve for both features together correctly shows that it is the interaction of the features that produces the model's predictions.
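A minimal sketch of this XOR behavior, using scikit-learn's DecisionTreeClassifier rather than the paper's DAC implementation: averaging predictions over the unknown feature stands in for marginalizing it out.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# XOR: each feature alone carries no information; together they are decisive.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
tree = DecisionTreeClassifier().fit(X, y)

# Knowing only x0: average over the unknown x1 -> probability stays at 0.5.
for v in (0, 1):
    p = tree.predict_proba(np.array([[v, 0], [v, 1]]))[:, 1].mean()
    print(f"x0={v}, x1 unknown -> P(class 1) = {p:.2f}")  # 0.50 both times

# Knowing both features: predictions are deterministic.
print(tree.predict(X))  # [0 1 1 0], matching XOR exactly
```

Either feature alone leaves the averaged prediction at 0.5 (zero attributable importance), while the pair determines the output completely, which is exactly the pattern the DAC curves capture.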

Source: Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees

Tasks

Task                            Papers   Share
Feature Engineering             1        33.33%
Feature Importance              1        33.33%
Interpretable Machine Learning  1        33.33%

Components

No components found.

Categories

Interpretability