Motivated by the medical application, we demonstrate for general segmentation tasks that including additional patches with only weak complementary labels during model training can significantly improve a model's predictive performance and robustness.
Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications.
Building on this, we show how explanation methods can be used in applications to understand predictions for misclassified samples, to compare algorithms or networks, and to examine the focus of networks.
1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
The presented library, iNNvestigate, addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.
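As a brief sketch of that common interface, the snippet below builds a toy Keras classifier and analyzes it; the calls follow the iNNvestigate 1.x API as I understand it, and the model and data are placeholders, so treat the details as illustrative rather than definitive.

```python
# Minimal sketch, assuming iNNvestigate 1.x with a standalone-Keras backend;
# the toy model and random data stand in for a real trained classifier.
import numpy as np
import keras
import innvestigate
import innvestigate.utils as iutils

model = keras.models.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(3, activation="softmax"),
])
x = np.random.rand(4, 8).astype("float32")

# Analyzers operate on pre-softmax outputs.
model_wo_softmax = iutils.model_wo_softmax(model)

# The same one-line call covers many methods, e.g. "gradient", "lrp.epsilon",
# "pattern.net", or "pattern.attribution" (the pattern-based analyzers
# additionally require a fit step on training data).
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_softmax)
relevance = analyzer.analyze(x)  # one attribution map per input, same shape as x
```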
Saliency methods aim to explain the predictions of deep neural networks.
We show that these methods do not produce the theoretically correct explanation for a linear model.
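To see why, consider a worked example in the spirit of the signal-plus-distractor analysis from the PatternNet line of work: for a linear model y = w^T x the gradient is simply w, but the optimal w must cancel the distractor and therefore need not point along the signal direction. The following numpy sketch is hypothetical, with all directions and data invented for illustration.

```python
# Hypothetical 2-d construction for illustration (directions chosen by me,
# in the spirit of the paper's signal-plus-distractor analysis).
import numpy as np

rng = np.random.default_rng(0)

a = np.array([1.0, 0.0])            # signal direction: what should be explained
d = np.array([1.0, 1.0])            # distractor direction: unrelated to the target
s = rng.normal(size=1000)           # signal component; the target is y = s
eps = rng.normal(size=1000)         # distractor component, independent of y
X = np.outer(s, a) + np.outer(eps, d)

# w = [1, -1] recovers y exactly because it cancels the distractor,
# so the optimal weights are not aligned with the signal direction a.
w = np.array([1.0, -1.0])
assert np.allclose(X @ w, s)

# The gradient of y = w^T x is w for every input: gradient-based saliency
# reports the distractor-cancelling direction, not the signal direction.
print("gradient 'explanation':", w)   # [ 1. -1.]
print("signal direction:      ", a)   # [ 1.  0.]
```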
Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way.
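For instance, with scikit-learn (my choice of tooling here, not necessarily the setup used in the paper), each binary "class k vs. the rest" problem is independent, so the K fits can be dispatched to parallel workers:

```python
# Minimal sketch with scikit-learn; dataset and hyperparameters are
# placeholders, not the configuration from the paper.
from sklearn.datasets import load_digits
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)  # 10 classes -> 10 independent binary SVMs

# Each "class k vs. the rest" SVM is trained on its own, so the fits can run
# concurrently; n_jobs=-1 dispatches one fit per available core.
clf = OneVsRestClassifier(LinearSVC(max_iter=10000), n_jobs=-1)
clf.fit(X, y)
print(clf.score(X, y))
```

Because the K binary subproblems share no state during training, parallelizing over classes scales naturally with the number of available cores.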