Interpretability

General • 14 methods

Interpretability Methods seek to explain the predictions made by neural networks by introducing mechanisms to induce or enforce interpretability. For example, LIME approximates the neural network locally with an interpretable model, such as a sparse linear model fit to perturbed samples around the input (a sketch of this idea follows below). Below you can find a continuously updating list of interpretability methods.
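
To make the LIME example concrete, here is a minimal sketch of the core idea for tabular data: perturb the instance, query the black-box model on the perturbations, weight the samples by proximity to the original input, and fit a weighted linear surrogate whose coefficients act as the explanation. This is an illustrative simplification, not the official `lime` library API; the names `explain_instance` and `black_box_predict` are assumptions made for this sketch.

```python
# Minimal LIME-style local surrogate sketch (illustrative, not the lime package API).
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, black_box_predict, num_samples=1000, kernel_width=0.75):
    """Return per-feature weights of a local linear surrogate fit around x."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise around x.
    perturbed = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
    # Query the black-box model; expected to return a score per perturbed row.
    preds = black_box_predict(perturbed)
    # Weight perturbations by proximity to x (RBF kernel on Euclidean distance).
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # Fit a weighted ridge regression as the locally interpretable model.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # feature attributions for this single prediction

# Example usage with a hypothetical trained classifier `clf`:
# attributions = explain_instance(x_row, lambda X: clf.predict_proba(X)[:, 1])
```

The real LIME method adds details omitted here, such as sampling in an interpretable representation and selecting a sparse subset of features, but the locally weighted surrogate above captures the mechanism being described.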