Search Results for author: Marc Christiansen

Found 1 paper, 0 papers with code

How Faithful are Self-Explainable GNNs?

no code implementations · 29 Aug 2023 · Marc Christiansen, Lea Villadsen, Zhiqiang Zhong, Stefano Teso, Davide Mottin

Self-explainable deep neural networks are a recent class of models that output ante-hoc local explanations faithful to the model's reasoning, and as such represent a step toward closing the gap between expressiveness and interpretability.
