In this work, we assess attribution methods from a perspective not previously explored in the graph domain: retraining.
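To make the retraining idea concrete, here is a minimal sketch of a remove-and-retrain (ROAR-style) evaluation in a generic tabular setting rather than the graph domain; the logistic-regression model, its coefficients standing in for an attribution method, and names such as `attribution_scores` and `remove_fraction` are illustrative assumptions, not the paper's protocol.

```python
# Minimal ROAR-style retraining evaluation (illustrative, tabular setting).
# The most-attributed features are "removed" (imputed with their mean),
# the model is retrained from scratch, and the accuracy drop is taken
# as evidence of how informative the attributed features were.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model; its coefficients stand in for an attribution method here.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
attribution_scores = np.abs(base.coef_).mean(axis=0)  # per-feature importance

for remove_fraction in (0.0, 0.25, 0.5):
    k = int(remove_fraction * X.shape[1])
    top = np.argsort(attribution_scores)[::-1][:k]  # most-attributed features
    X_tr_m, X_te_m = X_tr.copy(), X_te.copy()
    # Impute the removed features with the training mean, then retrain.
    X_tr_m[:, top] = X_tr[:, top].mean(axis=0)
    X_te_m[:, top] = X_tr[:, top].mean(axis=0)
    retrained = LogisticRegression(max_iter=1000).fit(X_tr_m, y_tr)
    print(f"removed {remove_fraction:.0%}: "
          f"accuracy = {retrained.score(X_te_m, y_te):.3f}")
```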
This survey explores the landscape of the transferability of adversarial examples.
Feature attribution explains neural network outputs by identifying relevant input features.
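As one concrete instance of feature attribution, the sketch below computes gradient-times-input attributions in PyTorch; the tiny sequential model and its dimensions are placeholders chosen for illustration, not any specific method from these works.

```python
# Minimal gradient-x-input feature attribution in PyTorch (illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one input sample
logits = model(x)
target = logits.argmax(dim=1).item()       # explain the predicted class

# Gradient of the target logit w.r.t. the input, times the input:
# a simple first-order estimate of each feature's contribution.
logits[0, target].backward()
attribution = (x.grad * x.detach()).squeeze(0)
print(attribution)
```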
We apply these metrics to mainstream attribution methods, offering a novel lens for analyzing and comparing them.
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making.
Deep learning models used in medical image analysis raise reliability concerns due to their black-box nature.
One challenging property lurking in medical datasets is class imbalance, where the number of samples differs substantially across classes.
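A common generic mitigation, shown below as a sketch rather than any of these papers' methods, is to weight the loss inversely to class frequency so that rare classes contribute more per sample; the toy label vector is an assumption for illustration.

```python
# Inverse-frequency class weighting for an imbalanced classification loss.
import torch
import torch.nn as nn

labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1, 2])  # toy imbalanced labels
counts = torch.bincount(labels).float()
weights = counts.sum() / (len(counts) * counts)  # rarer class -> larger weight

criterion = nn.CrossEntropyLoss(weight=weights)  # weighted per-class loss
logits = torch.randn(len(labels), len(counts))
loss = criterion(logits, labels)
print(weights, loss.item())
```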
Do black-box neural network models learn clinically relevant features for fracture diagnosis?
The automation of chest X-ray reporting has garnered significant interest due to the time-consuming nature of the task.
We propose a method to identify features with predictive information in the input domain.
We present our findings using publicly available chest pathology datasets (CheXpert, NIH ChestX-ray8) and COVID-19 datasets (BrixIA and the COVID-19 chest X-ray segmentation dataset).
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Is critical input information encoded in specific sparse pathways within the neural network?
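One simplified way to probe this question, sketched below under assumptions of my own (a two-layer model, contribution ranked by activation times gradient, a hypothetical top-k of 8 units), is to keep only the most-contributing hidden units and check how much of the original target logit survives the sparse forward pass.

```python
# Probing a sparse "pathway": keep only the top-k hidden units ranked by
# contribution (activation * gradient) and compare the target logit of the
# full and the sparsified forward pass. A simplified sketch, not the
# papers' exact protocol.
import torch
import torch.nn as nn

torch.manual_seed(0)
fc1, fc2 = nn.Linear(16, 64), nn.Linear(64, 3)
x = torch.randn(1, 16)

h = torch.relu(fc1(x))
h.retain_grad()                                  # keep grad of hidden units
logits = fc2(h)
target = logits.argmax(dim=1).item()
logits[0, target].backward()

contribution = (h * h.grad).squeeze(0)           # per-unit contribution
keep = contribution.topk(k=8).indices            # the 8 most critical units
mask = torch.zeros_like(h)
mask[0, keep] = 1.0

sparse_logits = fc2(h.detach() * mask)           # forward through the pathway
print(logits[0, target].item(), sparse_logits[0, target].item())
```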
Chest computed tomography (CT) has played an essential diagnostic role in assessing patients with COVID-19 by showing disease-specific image features such as ground-glass opacity and consolidation.
In this work, we empirically show that two approaches for handling the gradient information, namely positive aggregation and positive propagation, break gradient-based attribution methods.
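To illustrate what positive aggregation does in isolation (a toy numpy example of my own, not the paper's experiments): negative gradient values are clamped to zero before channels are aggregated, so negatively contributing evidence is silently discarded, and the resulting map can rank locations very differently from the full signed aggregation.

```python
# Positive vs. signed aggregation of gradients across channels (toy example).
import numpy as np

rng = np.random.default_rng(0)
grads = rng.standard_normal((4, 5, 5))          # (channels, height, width)

signed = grads.sum(axis=0)                      # full signed aggregation
positive = np.clip(grads, 0, None).sum(axis=0)  # positive aggregation

# Low correlation between the two maps indicates how much information
# the clamping of negative gradients throws away.
print(np.corrcoef(signed.ravel(), positive.ravel())[0, 1])
```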
Our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation.
Attributing the output of a neural network to the contributions of its input elements is a way of shedding light on its black-box nature.
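Besides gradient-based attribution, a complementary perturbation-based view is occlusion: zero out one input element at a time and record how much the output drops. The sketch below is a generic illustration with a placeholder model, not a method from these works.

```python
# Occlusion attribution: occlude one input element at a time and record
# the drop in the model output (generic perturbation-based sketch).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

x = torch.randn(1, 8)
with torch.no_grad():
    baseline = model(x).item()
    attribution = torch.zeros(8)
    for i in range(8):
        occluded = x.clone()
        occluded[0, i] = 0.0                 # remove one input element
        attribution[i] = baseline - model(occluded).item()
print(attribution)  # large values = elements the output depends on
```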