no code implementations • 3 Feb 2023 • Miriam Hägele, Johannes Eschrich, Lukas Ruff, Maximilian Alber, Simon Schallenberg, Adrien Guillot, Christoph Roderburg, Frank Tacke, Frederick Klauschen
Motivated by this medical application, we demonstrate for general segmentation tasks that including additional patches with only weak complementary labels during model training can significantly improve a model's predictive performance and robustness.
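A minimal sketch of how such weakly labelled patches could enter a segmentation loss, purely illustrative and not the authors' exact formulation: fully annotated patches get the usual pixel-wise loss, while for weak patches only a complementary label (a class known to be absent) is available, so the model is penalised for assigning probability to that class. The function signature and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits_full, target_full, logits_weak, absent_class, weak_weight=0.5):
    """Pixel-wise cross-entropy on fully annotated patches plus a penalty on
    weakly labelled patches, where only the absence of a class is known.
    Shapes: logits (B, C, H, W), target (B, H, W)."""
    # Standard supervised segmentation loss on the fully labelled patches.
    loss_full = F.cross_entropy(logits_full, target_full)

    # For weak patches we only know that `absent_class` does not occur,
    # so we penalise any probability mass the model assigns to it.
    probs_weak = torch.softmax(logits_weak, dim=1)
    loss_weak = probs_weak[:, absent_class].mean()

    return loss_full + weak_weight * loss_weak
```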
no code implementations • 14 Jan 2020 • Stephanie Brandl, David Lassner, Maximilian Alber
Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications.
2 code implementations • NeurIPS 2019 • Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel Ackermann, Klaus-Robert Müller, Pan Kessel
Explanation methods aim to make neural networks more trustworthy and interpretable.
1 code implementation • 9 Apr 2019 • Maximilian Alber
Building on this, we show how explanation methods can be used in applications to understand predictions for misclassified samples, to compare algorithms or networks, and to examine the focus of networks.
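A minimal sketch of the kind of workflow described here, inspecting the samples a model gets wrong; a plain gradient-times-input saliency stands in for any attribution method, and the function name and interface are assumptions rather than the chapter's code.

```python
import torch

def explain_misclassified(model, x, y, top_k=5):
    """Select samples the model misclassifies and attribute their predictions
    with a simple gradient*input saliency (a stand-in for richer methods)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    wrong = (pred != y).nonzero(as_tuple=True)[0][:top_k]

    # Back-propagate the score of the (incorrectly) predicted class.
    score = logits[wrong, pred[wrong]].sum()
    score.backward()

    saliency = (x.grad * x)[wrong]   # gradient * input attribution
    return wrong, saliency
```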
1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
The presented library, iNNvestigate, addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.
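A usage sketch of that common interface, following the style of the iNNvestigate README for Keras models; the exact analyzer names and fitting requirements vary across library versions, and `model`, `x_train`, and `x_test` are assumed to be defined.

```python
import innvestigate
import innvestigate.utils

# `model` is any trained Keras classifier; the analyzers operate on the
# pre-softmax scores, so the softmax layer is stripped first.
model_wo_sm = innvestigate.utils.model_wo_softmax(model)

# The same interface serves many methods: only the method name changes.
for method in ["gradient", "lrp.epsilon", "pattern.attribution"]:
    analyzer = innvestigate.create_analyzer(method, model_wo_sm)
    # Pattern-based methods are fitted on (a sample of) the training data.
    if method.startswith("pattern"):
        analyzer.fit(x_train, batch_size=256, verbose=0)
    attribution = analyzer.analyze(x_test[:8])   # one heatmap per input
```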
no code implementations • 8 Aug 2018 • Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le
The back-propagation algorithm is the cornerstone of deep learning.
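For context, a hand-written sketch of the textbook algorithm this entry refers to, one gradient step for a tiny one-hidden-layer network; this is standard back-propagation, not the evolved variants the paper searches for.

```python
import numpy as np

def backprop_step(x, y, W1, W2, lr=0.1):
    """One gradient step for a one-hidden-layer network
    (tanh hidden layer, squared-error loss), chain rule written out by hand."""
    # Forward pass.
    h = np.tanh(W1 @ x)          # hidden activations
    y_hat = W2 @ h               # prediction
    err = y_hat - y              # gradient of 0.5*||y_hat - y||^2 w.r.t. y_hat

    # Backward pass: propagate the error layer by layer.
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err
    grad_W1 = np.outer(grad_h * (1 - h**2), x)   # tanh'(z) = 1 - tanh(z)^2

    return W1 - lr * grad_W1, W2 - lr * grad_W2
```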
no code implementations • NeurIPS 2017 • Maximilian Alber, Pieter-Jan Kindermans, Kristof Schütt, Klaus-Robert Müller, Fei Sha
Kernel machines as well as neural networks possess universal function approximation properties.
1 code implementation • ICLR 2018 • Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Saliency methods aim to explain the predictions of deep neural networks.
3 code implementations • ICLR 2018 • Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne
We show that these methods do not produce the theoretically correct explanation for a linear model.
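A tiny numerical illustration of this point, using a signal-plus-distractor construction in the spirit of the paper's linear example (the specific directions and data here are made up): the optimal weight vector must cancel the distractor, so gradient-style explanations, which simply return the weights, point away from the true signal direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each input is x = a_s * s + a_d * d; the target is the signal s.
a_s = np.array([1.0, 0.0])      # direction in which the signal lives
a_d = np.array([1.0, 1.0])      # direction of the distractor
s = rng.normal(size=1000)
d = rng.normal(size=1000)
X = np.outer(s, a_s) + np.outer(d, a_d)

# Fit the optimal linear model y = w^T x by least squares.
w, *_ = np.linalg.lstsq(X, s, rcond=None)
print(w)   # ~[1, -1]: the weights must *cancel* the distractor

# A gradient/saliency "explanation" of this model is just w itself,
# which does not point in the true signal direction a_s = [1, 0].
```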
1 code implementation • 25 Nov 2016 • Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft
Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way.
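A sketch of that straightforward per-class parallelization using scikit-learn and joblib; this illustrates the baseline scheme the sentence describes, not the distributed solver developed in the paper.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.svm import LinearSVC

def train_ovr_parallel(X, y, n_jobs=-1):
    """Train one binary SVM per class in parallel: class k's SVM separates
    'is class k' from 'is any other class' (one-vs.-rest)."""
    classes = np.unique(y)

    def fit_one(k):
        return LinearSVC().fit(X, (y == k).astype(int))

    models = Parallel(n_jobs=n_jobs)(delayed(fit_one)(k) for k in classes)
    return classes, models

def predict_ovr(classes, models, X):
    # Pick the class whose SVM assigns the highest decision score.
    scores = np.column_stack([m.decision_function(X) for m in models])
    return classes[scores.argmax(axis=1)]
```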