no code implementations • 31 Mar 2025 • Patrick Knab, Sascha Marton, Udo Schlegel, Christian Bartelt
As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models.
no code implementations • 6 Feb 2025 • Sascha Marton, Moritz Schneider
Neural architectures such as Recurrent Neural Networks (RNNs), Transformers, and State-Space Models have shown great success in handling sequential data by learning temporal dependencies.
1 code implementation • 16 Dec 2024 • Patrick Knab, Katharina Prasse, Sascha Marton, Christian Bartelt, Margret Keuper
We introduce visually Grounded Concept Bottleneck Models (GCBM), which derive concepts on the image level using segmentation and detection foundation models.
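As a rough illustration of the grounding step, the sketch below clusters embeddings of segment proposals into concept candidates; `segment_image` and `embed_crop` are hypothetical stand-ins for the segmentation/detection foundation models and pretrained encoder the paper builds on.

```python
# Minimal sketch of image-level concept discovery (not the authors' exact pipeline).
# `segment_image` and `embed_crop` stand in for foundation-model segment proposals
# and a pretrained image encoder, respectively.
import numpy as np
from sklearn.cluster import KMeans

def segment_image(image: np.ndarray) -> list[np.ndarray]:
    """Hypothetical stand-in: return boolean masks of segment proposals."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    return [(xs < w // 2), (xs >= w // 2)]  # toy left/right split

def embed_crop(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pretrained encoder applied to the masked region."""
    return np.array([image[mask].mean(), image[mask].std()])

def discover_concepts(images: list[np.ndarray], n_concepts: int = 4) -> KMeans:
    # Collect one embedding per segment proposal across the dataset, then cluster:
    # each cluster centroid acts as a visually grounded concept candidate.
    embeddings = [embed_crop(img, m) for img in images for m in segment_image(img)]
    return KMeans(n_clusters=n_concepts, n_init=10).fit(np.stack(embeddings))

images = [np.random.rand(32, 32) for _ in range(8)]
concepts = discover_concepts(images)
print(concepts.cluster_centers_.shape)  # (n_concepts, embedding_dim)
```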
no code implementations • 3 Sep 2024 • Patrick Knab, Sascha Marton, Christian Bartelt, Robert Fuder
For outlier interpretation, we (i) adapt widely used XAI techniques to the autoencoder's encoder.
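A minimal sketch of one such adaptation, using gradient-based attribution on the reconstruction error (architecture and sizes are illustrative, not taken from the paper):

```python
# Sketch: attributing an autoencoder's reconstruction error to input features
# via input gradients, one common way to adapt XAI techniques to autoencoders.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(10, 4), nn.ReLU(),   # encoder
    nn.Linear(4, 10),              # decoder
)

x = torch.randn(1, 10, requires_grad=True)  # candidate outlier
recon_error = ((autoencoder(x) - x) ** 2).mean()
recon_error.backward()

# Features with large |gradient| contribute most to the anomaly score.
attribution = x.grad.abs().squeeze()
print(attribution.argsort(descending=True)[:3])  # top-3 suspicious features
```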
1 code implementation • 16 Aug 2024 • Sascha Marton, Tim Grams, Florian Vogt, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
In this paper, we introduce SYMPOL, a novel method for SYMbolic tree-based on-POLicy RL.
1 code implementation • 2 Jul 2024 • Andrej Tschalzev, Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Our framework is available at: https://github.com/atschalz/dc_tabeval
1 code implementation • 12 Mar 2024 • Patrick Knab, Sascha Marton, Christian Bartelt
To address these challenges, we introduce the DSEG-LIME (Data-Driven Segmentation LIME) framework, featuring: i) a data-driven segmentation for human-recognized feature generation by foundation model integration, and ii) a user-steered granularity in the hierarchical segmentation procedure through composition.
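The `lime` package exposes a `segmentation_fn` hook in `explain_instance`, which is a natural injection point for such data-driven masks; the sketch below uses a trivial placeholder segmentation where DSEG-LIME would invoke a foundation model.

```python
# Sketch: swapping LIME's default superpixels for a data-driven segmentation.
# `foundation_segments` is a placeholder for foundation-model masks;
# here it is a toy two-segment split, and the classifier is a dummy.
import numpy as np
from lime import lime_image

def foundation_segments(image: np.ndarray) -> np.ndarray:
    """Placeholder: label map where each integer marks one segment."""
    h, w = image.shape[:2]
    labels = np.zeros((h, w), dtype=int)
    labels[:, w // 2:] = 1  # toy two-segment split
    return labels

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    """Placeholder classifier: probability from mean intensity."""
    p = batch.mean(axis=(1, 2, 3))
    return np.stack([1 - p, p], axis=1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    np.random.rand(32, 32, 3),
    classifier_fn,
    segmentation_fn=foundation_segments,  # the DSEG-style injection point
    num_samples=100,
)
print(explanation.top_labels)
```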
3 code implementations • 29 Sep 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Our method combines axis-aligned splits, which are a useful inductive bias for tabular data, with the flexibility of gradient-based optimization.
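A toy sketch of the core idea, using a soft relaxation (softmax over features, sigmoid over the threshold comparison) to make a single axis-aligned split learnable by gradient descent; the paper's actual parametrization may differ.

```python
# One gradient-trained, axis-aligned split (a depth-1 tree): a softmax over
# features picks the split variable, a sigmoid relaxes the threshold test.
import torch

torch.manual_seed(0)
X = torch.randn(256, 5)
y = (X[:, 2] > 0.5).float()  # ground truth depends on feature 2

feature_logits = torch.zeros(5, requires_grad=True)  # soft feature choice
threshold = torch.zeros(1, requires_grad=True)
leaf_values = torch.zeros(2, requires_grad=True)     # prediction per side

opt = torch.optim.Adam([feature_logits, threshold, leaf_values], lr=0.1)
for _ in range(300):
    x_split = X @ torch.softmax(feature_logits, dim=0)  # soft axis selection
    right = torch.sigmoid(4.0 * (x_split - threshold))  # soft comparison
    pred = right * leaf_values[1] + (1 - right) * leaf_values[0]
    loss = torch.nn.functional.binary_cross_entropy_with_logits(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Typically recovers feature 2 and a threshold near 0.5 on this toy data.
print(feature_logits.softmax(0).argmax().item(), threshold.item())
```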
1 code implementation • 5 May 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability.
1 code implementation • 10 Jun 2022 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt
We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues.
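As a generic illustration of this setting (not necessarily the authors' approach), one can query the opaque model on sampled surrogate inputs and distill it into an interpretable surrogate:

```python
# Generic data-free distillation sketch: sample inputs from an assumed range,
# label them with the black box, and fit an interpretable surrogate.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def black_box(x: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network whose data we cannot access."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1]

X_surrogate = np.random.uniform(-2, 2, size=(2000, 2))  # assumed input range
surrogate = DecisionTreeRegressor(max_depth=3).fit(X_surrogate, black_box(X_surrogate))
print(export_text(surrogate, feature_names=["x0", "x1"]))
```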
no code implementations • 10 Dec 2020 • Christian Bartelt, Sascha Marton, Heiner Stuckenschmidt
The approach is based on the idea of training a so-called interpretation network, which receives the weights and biases of the trained network as input and outputs a numerical representation of the function the network was trained to learn; this representation can then be translated directly into a symbolic form.
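A toy end-to-end sketch of this interpretation-network idea, with degree-2 polynomial targets and all architecture sizes chosen purely for illustration:

```python
# An MLP that maps the flattened weights of small trained networks to the
# coefficients of the functions they represent (here, degree-2 polynomials).
import torch
import torch.nn as nn

def train_tiny_net(coeffs: torch.Tensor, steps: int = 150) -> torch.Tensor:
    """Fit a 1-8-1 MLP to y = c0 + c1*x + c2*x^2, return its flattened weights."""
    net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    x = torch.linspace(-1, 1, 64).unsqueeze(1)
    y = coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2
    for _ in range(steps):
        loss = ((net(x) - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.cat([p.detach().flatten() for p in net.parameters()])

# Dataset of (flattened weights -> true coefficients) pairs.
targets = torch.rand(32, 3) * 2 - 1
inputs = torch.stack([train_tiny_net(c) for c in targets])

# The interpretation network maps weight vectors to symbolic-ready coefficients.
interp_net = nn.Sequential(nn.Linear(inputs.shape[1], 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(interp_net.parameters(), lr=0.001)
for _ in range(500):
    loss = ((interp_net(inputs) - targets) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(interp_net(inputs[:1]))  # predicted (c0, c1, c2) for the first tiny net
```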