Search Results for author: Claudio Fanconi

Found 2 papers, 2 papers with code

This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks

1 code implementation • 5 May 2021 • Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Kohler

Deep neural networks that yield human interpretable decisions by architectural design have lately become an increasingly popular alternative to post hoc interpretation of traditional black-box models.

Tasks: Explainable Artificial Intelligence, Image Classification, +1
