no code implementations • 26 Jan 2024 • Lukas Koller, Tobias Ladner, Matthias Althoff
Neural networks are vulnerable to adversarial attacks, i.e., small input perturbations can significantly change a network's outputs.
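Since the abstract only names the attack model, the following is a minimal, hypothetical sketch of such a perturbation (an FGSM-style step in PyTorch; the model, loss, and perturbation budget are illustrative assumptions, not the paper's setup):

```python
# Hypothetical FGSM-style sketch of a small adversarial perturbation;
# model, loss, and epsilon are illustrative assumptions, not the
# paper's setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # clean input
y = torch.tensor([0])                      # assumed true label

# The gradient of the loss w.r.t. the input gives the attack direction.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1                              # small perturbation budget
x_adv = x + epsilon * x.grad.sign()        # perturbed input

# A small input perturbation can noticeably change the outputs.
print(model(x))      # output on the clean input
print(model(x_adv))  # output on the perturbed input
```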
no code implementations • 6 Nov 2023 • Michael Gadermayr, Lukas Koller, Maximilian Tschuchnig, Lea Maria Stangassinger, Christina Kreutzer, Sebastien Couillard-Despres, Gertie Janneke Oostingh, Anton Hittmair
Here, we conduct a large study incorporating 10 different data set configurations, two feature extraction approaches (supervised and self-supervised), stain normalization, and two multiple instance learning architectures.
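As a hedged illustration of one pipeline component named above, here is a sketch of Reinhard-style color transfer, a common choice for stain normalization; the paper's actual normalization method is not specified here, so the technique and function names are assumptions:

```python
# Hypothetical Reinhard-style stain/color normalization sketch; one
# common stain normalization approach, not necessarily the one used
# in the study.
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, target_rgb):
    """Match the per-channel LAB mean/std of src to a target image."""
    src_lab = color.rgb2lab(src_rgb)
    tgt_lab = color.rgb2lab(target_rgb)
    src_mu, src_sd = src_lab.mean(axis=(0, 1)), src_lab.std(axis=(0, 1))
    tgt_mu, tgt_sd = tgt_lab.mean(axis=(0, 1)), tgt_lab.std(axis=(0, 1))
    # Shift and scale each LAB channel, then convert back to RGB.
    norm_lab = (src_lab - src_mu) / src_sd * tgt_sd + tgt_mu
    return np.clip(color.lab2rgb(norm_lab), 0.0, 1.0)
```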
1 code implementation • 10 Nov 2022 • Michael Gadermayr, Lukas Koller, Maximilian Tschuchnig, Lea Maria Stangassinger, Christina Kreutzer, Sebastien Couillard-Despres, Gertie Janneke Oostingh, Anton Hittmair
Multiple instance learning is a powerful approach for whole-slide-image-based diagnosis in the absence of pixel- or patch-level annotations.
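To make the bag-level idea concrete, here is a hypothetical attention-based MIL pooling sketch in PyTorch: patch features from one slide are aggregated into a single slide-level prediction without any patch labels. Dimensions and architecture are illustrative, not necessarily either of the architectures compared in the paper:

```python
# Hypothetical attention-based multiple instance learning sketch:
# pooling patch features of a whole slide image into one slide-level
# prediction; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        # Attention scores weight each patch embedding in the bag.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):  # bag: (num_patches, feat_dim)
        weights = torch.softmax(self.attention(bag), dim=0)  # (num_patches, 1)
        slide_embedding = (weights * bag).sum(dim=0)         # (feat_dim,)
        return self.classifier(slide_embedding)              # slide-level logits

bag = torch.randn(1000, 512)  # e.g. 1000 patch features from one slide
logits = AttentionMIL()(bag)
```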
no code implementations • 25 Mar 2022 • Mohammad Abdulaziz, Lukas Koller
From those semantics, we derive a validation algorithm for temporal planning and show, using a formal proof in Isabelle/HOL, that this algorithm implements our semantics.
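The verified algorithm itself lives in Isabelle/HOL; purely as intuition, here is a toy Python sketch of the kind of checks a temporal plan validator performs. All names, the mutex relation, and the epsilon separation rule below are illustrative assumptions, not the paper's semantics:

```python
# Toy sketch of temporal plan validation checks; the paper's algorithm
# is derived from a formal semantics and proved correct in Isabelle/HOL.
# Everything below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class TimedAction:
    name: str
    start: float
    duration: float

    @property
    def end(self) -> float:
        return self.start + self.duration

def overlaps(a: TimedAction, b: TimedAction) -> bool:
    return a.start < b.end and b.start < a.end

def validate(plan, mutex_pairs, epsilon=0.001):
    """Reject plans where mutually exclusive actions overlap in time,
    or where distinct events are closer together than epsilon."""
    for a in plan:
        for b in plan:
            if a is not b and (a.name, b.name) in mutex_pairs and overlaps(a, b):
                return False
    events = sorted(t for act in plan for t in (act.start, act.end))
    return all(b == a or b - a >= epsilon for a, b in zip(events, events[1:]))

plan = [TimedAction("move", 0.0, 2.0), TimedAction("load", 2.5, 1.0)]
print(validate(plan, mutex_pairs={("move", "load")}))  # True: no overlap
```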