Interpretability

General • 17 methods

Interpretability methods seek to explain the predictions made by neural networks by introducing mechanisms that induce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updated list of interpretability methods.
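As an illustration of the LIME idea, here is a minimal numpy-only sketch of a local linear surrogate: perturb the instance, query the black-box model, weight samples by proximity, and fit a weighted linear model whose coefficients serve as local feature attributions. The function name `lime_explain`, the Gaussian perturbation scale, and the kernel width are illustrative assumptions, not the original LIME implementation (which additionally uses interpretable feature representations and sparse regression).

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style sketch: fit a locally weighted linear surrogate around x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise (assumed perturbation scheme)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Proximity kernel: perturbations closer to x get larger weights
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares via rescaling rows by sqrt(w)
    A = np.hstack([Z, np.ones((n_samples, 1))])  # append intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1], coef[-1]  # local feature weights, intercept

# Toy black-box model standing in for a neural network
f = lambda Z: np.sin(Z[:, 0]) + 2.0 * Z[:, 1] ** 2
weights, intercept = lime_explain(f, np.array([0.0, 1.0]))
```

Near the point `(0, 1)` the fitted weights approximate the local gradient of the black box (about 1 for the first feature, about 4 for the second), which is exactly the "locally faithful" behavior LIME aims for.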

[Table: 17 interpretability methods with introduction year and paper count; the method-name column was lost in extraction, so only the year/count pairs (e.g. 2017 — 525 papers, 2016 — 361, 2015 — 258) survive.]