Search Results for author: Sascha Marton

Found 11 papers, 7 papers with code

Which LIME should I trust? Concepts, Challenges, and Solutions

no code implementations31 Mar 2025 Patrick Knab, Sascha Marton, Udo Schlegel, Christian Bartelt

As neural networks become dominant in essential systems, Explainable Artificial Intelligence (XAI) plays a crucial role in fostering trust and detecting potential misbehavior of opaque models.

Explainable artificial intelligence · Explainable Artificial Intelligence (XAI) +2

Decision Trees That Remember: Gradient-Based Learning of Recurrent Decision Trees with Memory

no code implementations6 Feb 2025 Sascha Marton, Moritz Schneider

Neural architectures such as Recurrent Neural Networks (RNNs), Transformers, and State-Space Models have shown great success in handling sequential data by learning temporal dependencies.

Feature Engineering · State Space Models

Aligning Visual and Semantic Interpretability through Visually Grounded Concept Bottleneck Models

1 code implementation16 Dec 2024 Patrick Knab, Katharina Prasse, Sascha Marton, Christian Bartelt, Margret Keuper

We introduce visually Grounded Concept Bottleneck Models (GCBM), which derive concepts on the image level using segmentation and detection foundation models.

Specificity
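
A minimal sketch of the general idea behind image-grounded concept discovery, assuming segment crops from a foundation model (e.g. SAM) and a hypothetical pretrained image encoder `embed_fn`; this illustrates the approach, not GCBM's exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def derive_concepts(segment_crops, embed_fn, n_concepts=10):
    # Embed each segment crop with a pretrained vision encoder
    # (embed_fn is an assumed stand-in, e.g. a CLIP image encoder).
    embeddings = np.stack([embed_fn(crop) for crop in segment_crops])
    # Cluster the embeddings; each cluster centroid serves as one
    # visually grounded concept derived at the image level.
    kmeans = KMeans(n_clusters=n_concepts, n_init="auto").fit(embeddings)
    return kmeans.cluster_centers_
```

Because the concepts come from actual image segments rather than a predefined text vocabulary, they stay visually grounded in the data the model sees.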

Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models

1 code implementation12 Mar 2024 Patrick Knab, Sascha Marton, Christian Bartelt

To address these challenges, we introduce the DSEG-LIME (Data-Driven Segmentation LIME) framework, featuring: i) a data-driven segmentation for human-recognized feature generation by foundation model integration, and ii) a user-steered granularity in the hierarchical segmentation procedure through composition.

Decision Making · Explainable artificial intelligence +4
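
A minimal sketch of the LIME-style core this entry builds on, assuming an integer segmentation mask `segments` from a foundation model and a `predict_fn` that returns the target-class probability for one image; DSEG-LIME's hierarchical composition step is omitted:

```python
import numpy as np
from sklearn.linear_model import Ridge

def segment_lime(image, predict_fn, segments, n_samples=500):
    # segments: integer mask of shape (H, W), one id per region,
    # produced by a segmentation foundation model (an assumption here).
    seg_ids = np.unique(segments)
    # Random binary vectors decide which segments stay visible per sample.
    masks = np.random.randint(0, 2, size=(n_samples, len(seg_ids)))
    baseline = image.mean(axis=(0, 1))  # grey-out replacement colour
    preds = []
    for mask in masks:
        perturbed = image.copy()
        for seg_id, keep in zip(seg_ids, mask):
            if not keep:
                perturbed[segments == seg_id] = baseline
        preds.append(predict_fn(perturbed))
    # Fit a linear surrogate; its coefficients score each segment's
    # contribution to the prediction.
    surrogate = Ridge(alpha=1.0).fit(masks, np.asarray(preds))
    return dict(zip(seg_ids.tolist(), surrogate.coef_))
```

Using foundation-model segments instead of superpixels is what makes the resulting features align with human-recognizable objects.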

GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data

3 code implementations29 Sep 2023 Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt

Our method combines axis-aligned splits, which are a useful inductive bias for tabular data, with the flexibility of gradient-based optimization.
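
A minimal sketch of one differentiable axis-aligned split, the building block such gradient-based tree methods rely on; the parameterisation here (softmax feature selection, sigmoid routing) is illustrative and not GRANDE's exact formulation:

```python
import torch
import torch.nn as nn

class SoftAxisAlignedSplit(nn.Module):
    """One split node trainable by gradient descent: a softmax over
    feature logits keeps the split (approximately) axis-aligned, and a
    learned threshold with a sigmoid gives a differentiable routing."""

    def __init__(self, n_features, temperature=1.0):
        super().__init__()
        self.feature_logits = nn.Parameter(torch.zeros(n_features))
        self.threshold = nn.Parameter(torch.zeros(1))
        self.temperature = temperature

    def forward(self, x):  # x: (batch, n_features)
        # Soft selection of a single feature dimension.
        weights = torch.softmax(self.feature_logits, dim=0)
        selected = x @ weights  # (batch,)
        # Probability of routing each sample to the right child.
        return torch.sigmoid((selected - self.threshold) / self.temperature)
```

At low temperature the sigmoid approaches a step function, recovering a conventional hard axis-aligned decision rule while remaining trainable end to end.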

GradTree: Learning Axis-Aligned Decision Trees with Gradient Descent

1 code implementation5 May 2023 Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt

Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability.

Binary Classification

Explaining Neural Networks without Access to Training Data

1 code implementation10 Jun 2022 Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt

We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues.

xRAI: Explainable Representations through AI

no code implementations10 Dec 2020 Christian Bartelt, Sascha Marton, Heiner Stuckenschmidt

The approach is based on training a so-called interpretation network that receives the weights and biases of the trained network as input and outputs a numerical representation of the function the network was trained to learn, which can be directly translated into a symbolic representation.

Decision Making
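
A minimal sketch of the interpretation-network idea described above, assuming the target symbolic form is a fixed-size coefficient vector (e.g. polynomial coefficients); the paper's actual representation and architecture differ:

```python
import torch
import torch.nn as nn

def flatten_params(model):
    # Concatenate all weights and biases of a trained network into one vector.
    return torch.cat([p.detach().flatten() for p in model.parameters()])

class InterpretationNetwork(nn.Module):
    """Maps the parameter vector of a trained network to coefficients of
    an assumed symbolic form (here, a fixed number of polynomial
    coefficients chosen for illustration)."""

    def __init__(self, n_params, n_coefficients=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, n_coefficients),
        )

    def forward(self, param_vector):
        return self.net(param_vector)
```

Such a network would be trained on many pairs of (trained-network parameters, known ground-truth function), so that it learns to read a symbolic description directly out of the weights, without querying the original network on data.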
