1 code implementation • 21 Dec 2023 • Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
Explanations in Computer Vision are often desired, but most Deep Neural Networks can only provide saliency maps with questionable faithfulness.
Ranked #1 on Interpretable Machine Learning on CUB-200-2011
1 code implementation • 23 Mar 2023 • Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
We argue that a human can only understand the decision of a machine learning model if the features are interpretable and only very few of them are used for a single decision.
Ranked #2 on Interpretable Machine Learning on CUB-200-2011
Fine-Grained Image Classification • Interpretable Machine Learning