1 code implementation • 12 Nov 2024 • Marvin Sextro, Gabriel Dernbach, Kai Standvoss, Simon Schallenberg, Frederick Klauschen, Klaus-Robert Müller, Maximilian Alber, Lukas Ruff
Understanding how deep learning models predict oncology patient risk can provide critical insights into disease progression, support clinical decision-making, and pave the way for trustworthy and data-driven precision medicine.
no code implementations • 21 Jun 2024 • Jonas Dippel, Niklas Prenißl, Julius Hense, Philipp Liznerski, Tobias Winterhoff, Simon Schallenberg, Marius Kloft, Oliver Buchstab, David Horst, Maximilian Alber, Lukas Ruff, Klaus-Robert Müller, Frederick Klauschen
Without any specific training for the diseases, our best-performing model reliably detected a broad spectrum of infrequent ("anomalous") pathologies with 95.0% (stomach) and 91.0% (colon) AUROC and generalized across scanners and hospitals.
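A minimal sketch of how such a score is computed from per-case anomaly scores, assuming binary labels (anomalous vs. common); the data and parameters below are illustrative, not the paper's pipeline:

```python
# Illustrative only: AUROC from anomaly scores; synthetic scores stand in
# for a detector's outputs on stomach or colon cases.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(50), np.zeros(950)])    # 1 = anomalous case
scores = np.concatenate([rng.normal(2.0, 1.0, 50),       # detector scores
                         rng.normal(0.0, 1.0, 950)])
print(f"AUROC: {roc_auc_score(labels, scores):.3f}")
```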
no code implementations • 8 Jan 2024 • Jonas Dippel, Barbara Feulner, Tobias Winterhoff, Timo Milbich, Stephan Tietz, Simon Schallenberg, Gabriel Dernbach, Andreas Kunft, Simon Heinke, Marie-Lisa Eich, Julika Ribbat-Idel, Rosemarie Krupar, Philipp Anders, Niklas Prenißl, Philipp Jurmeister, David Horst, Lukas Ruff, Klaus-Robert Müller, Frederick Klauschen, Maximilian Alber
Artificial intelligence has started to transform histopathology, impacting clinical diagnostics and biomedical research.
no code implementations • 3 Feb 2023 • Miriam Hägele, Johannes Eschrich, Lukas Ruff, Maximilian Alber, Simon Schallenberg, Adrien Guillot, Christoph Roderburg, Frank Tacke, Frederick Klauschen
Motivated by this medical application, we demonstrate for general segmentation tasks that including additional patches with only weak, complementary labels during model training can significantly improve a model's predictive performance and robustness.
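A minimal PyTorch sketch of this idea, assuming a per-pixel classifier and a weak label stating that one class is absent from a patch; the loss form and the `alpha` weight are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits_full, target_full, logits_weak, absent_class, alpha=0.5):
    """Combine full supervision with a weak "class c is absent" signal.

    logits_*: (N, C, H, W) per-pixel class scores; target_full: (N, H, W).
    """
    supervised = F.cross_entropy(logits_full, target_full)
    # Penalize probability mass placed on the class known to be absent.
    p_absent = F.softmax(logits_weak, dim=1)[:, absent_class]
    complementary = -torch.log(1.0 - p_absent + 1e-8).mean()
    return supervised + alpha * complementary
```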
no code implementations • 14 Jan 2020 • Stephanie Brandl, David Lassner, Maximilian Alber
Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications.
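As a toy illustration of what "semantic relationships" means operationally: nearby embedding vectors have high cosine similarity (the vectors below are made up; real ones would come from, e.g., word2vec or fastText):

```python
import numpy as np

emb = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.2]),
    "car":   np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["car"]))    # low: unrelated words
```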
2 code implementations • NeurIPS 2019 • Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel Ackermann, Klaus-Robert Müller, Pan Kessel
Explanation methods aim to make neural networks more trustworthy and interpretable.
1 code implementation • 9 Apr 2019 • Maximilian Alber
Building on this, we show how explanation methods can be used in applications to understand predictions for misclassified samples, to compare algorithms or networks, and to examine the focus of networks.
1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementations of many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.
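A sketch of the library's usage pattern (the exact setup depends on the installed iNNvestigate/TensorFlow versions; consult the project README for the authoritative API):

```python
import numpy as np
import tensorflow as tf
import innvestigate

tf.compat.v1.disable_eager_execution()  # required by iNNvestigate 2.x

# Tiny stand-in classifier; any trained Keras model with a linear
# (pre-softmax) output works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(3),
])

analyzer = innvestigate.create_analyzer("lrp.epsilon", model)
x = np.random.rand(4, 8).astype("float32")
relevance = analyzer.analyze(x)  # per-input relevance, same shape as x
print(relevance.shape)
```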
no code implementations • 8 Aug 2018 • Maximilian Alber, Irwan Bello, Barret Zoph, Pieter-Jan Kindermans, Prajit Ramachandran, Quoc Le
The back-propagation algorithm is the cornerstone of deep learning.
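For readers who want the mechanics spelled out, here is back-propagation in miniature: gradients of a two-layer network derived by hand with the chain rule (a generic refresher, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))            # batch of inputs
y = rng.normal(size=(5, 1))            # regression targets
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

# Forward pass.
h = np.maximum(0, x @ W1)              # ReLU hidden layer
y_hat = h @ W2
loss = np.mean((y_hat - y) ** 2)

# Backward pass: chain rule, layer by layer.
d_yhat = 2 * (y_hat - y) / len(y)      # dL/dy_hat
dW2 = h.T @ d_yhat                     # dL/dW2
d_h = d_yhat @ W2.T                    # propagate to hidden activations
d_h[h <= 0] = 0                        # ReLU gate
dW1 = x.T @ d_h                        # dL/dW1
print(f"loss: {loss:.3f}")
```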
no code implementations • NeurIPS 2017 • Maximilian Alber, Pieter-Jan Kindermans, Kristof Schütt, Klaus-Robert Müller, Fei Sha
Kernel machines as well as neural networks possess universal function approximation properties.
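One classical bridge between the two: random Fourier features turn an RBF kernel machine into a shallow network with a fixed random first layer (Rahimi and Recht). The sketch below shows this background connection, not the paper's specific construction:

```python
import numpy as np

def rff(X, n_features=500, gamma=1.0, seed=0):
    """Map X so that z(x) @ z(y) approximates exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(100, 5))
Z = rff(X)
approx = Z @ Z.T                               # approximate RBF kernel matrix
exact = np.exp(-np.sum((X[0] - X[1]) ** 2))    # check one entry exactly
print(approx[0, 1], "vs", exact)
```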
1 code implementation • ICLR 2018 • Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Saliency methods aim to explain the predictions of deep neural networks.
4 code implementations • ICLR 2018 • Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne
We show that these methods do not produce the theoretically correct explanation for a linear model.
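The linear case can be checked numerically: for y = w^T x, gradient saliency returns the weight vector w (the "filter"), which need not point at the signal; the informative direction is the pattern a = cov(x, y)/var(y). A self-contained demonstration, constructed to mirror the paper's toy setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
s = rng.normal(size=n)                     # signal of interest
d = rng.normal(size=n)                     # distractor
a_s = np.array([1.0, 0.0])                 # signal direction in input space
a_d = np.array([1.0, 1.0])                 # distractor direction
X = np.outer(s, a_s) + np.outer(d, a_d)    # inputs mix signal and distractor

w = np.array([1.0, -1.0])                  # filter that recovers y = s exactly
y = X @ w
pattern = (X * y[:, None]).mean(axis=0) / y.var()

print("gradient/filter w:", w)             # [1, -1]: cancels the distractor
print("pattern a:        ", pattern)       # ~ [1, 0]: the signal direction
```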
1 code implementation • 25 Nov 2016 • Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft
Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way.
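A minimal sketch of that embarrassingly parallel baseline: fit one binary SVM per class concurrently (the paper's contribution goes beyond this; the dataset and solver choices below are illustrative):

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_classes=5,
                           n_informative=10, random_state=0)

def fit_one_vs_rest(c):
    # Each class yields an independent binary problem: c vs. the rest.
    return LinearSVC().fit(X, (y == c).astype(int))

models = Parallel(n_jobs=-1)(delayed(fit_one_vs_rest)(c) for c in np.unique(y))
scores = np.column_stack([m.decision_function(X) for m in models])
pred = np.unique(y)[scores.argmax(axis=1)]  # predict the highest-scoring class
```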