no code implementations • 12 Mar 2024 • Patrick Knab, Sascha Marton, Christian Bartelt
Explainable Artificial Intelligence is critical in unraveling decision-making processes in complex machine learning models.
2 code implementations • 29 Sep 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Our method combines axis-aligned splits, which are a useful inductive bias for tabular data, with the flexibility of gradient-based optimization.
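As a minimal sketch of this combination (not the paper's method), the toy example below learns a single differentiable, axis-aligned split: a soft decision stump whose threshold and leaf values are trained by plain gradient descent. The data, variable names, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = (X[:, 0] > 0.3).astype(float)    # ground truth: an axis-aligned rule on feature 0

feature = 0                  # axis-aligned: the split inspects one feature only
t = 0.0                      # learnable split threshold
leaf = np.array([0.0, 1.0])  # learnable leaf values
lr, steepness = 0.1, 10.0    # steepness controls how "hard" the soft split is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(steepness * (X[:, feature] - t))   # soft routing to the right leaf
    pred = (1 - p) * leaf[0] + p * leaf[1]
    err = pred - y
    # gradients of the mean squared error w.r.t. threshold and leaf values
    g = 2 * err * (leaf[1] - leaf[0]) * p * (1 - p)
    t -= lr * np.mean(-steepness * g)
    leaf[0] -= lr * np.mean(2 * err * (1 - p))
    leaf[1] -= lr * np.mean(2 * err * p)

# After training, the soft split can be read off as a hard, interpretable rule.
hard_pred = (X[:, feature] > t).astype(float)
accuracy = np.mean(hard_pred == y)
```

Because the split stays axis-aligned throughout, the learned stump can be translated back into the usual `if x[0] > t` decision-tree form after gradient training.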
1 code implementation • 5 May 2023 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability.
1 code implementation • 10 Jun 2022 • Sascha Marton, Stefan Lüdtke, Christian Bartelt, Andrej Tschalzev, Heiner Stuckenschmidt
We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues.
no code implementations • 10 Dec 2020 • Christian Bartelt, Sascha Marton, Heiner Stuckenschmidt
The approach is based on the idea of training a so-called interpretation network that receives the weights and biases of the trained network as input. It outputs a numerical representation of the function the network was supposed to learn, which can be directly translated into a symbolic representation.
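The idea can be illustrated with a heavily simplified toy (not the paper's architecture): many tiny networks sharing a fixed random hidden layer are fit to data from random linear functions y = a*x + b, and an "interpretation" model is then fit by least squares to map each network's output-layer weights back to the symbolic coefficients (a, b). The shared hidden layer, the linear function family, and the use of least squares instead of a neural interpretation network are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random hidden layer shared by all small networks (a simplifying
# assumption, so that only the output-layer weights vary across networks).
H_UNITS = 16
Wh = rng.normal(size=(1, H_UNITS))
bh = rng.normal(size=H_UNITS)
x_grid = np.linspace(-1, 1, 64).reshape(-1, 1)
H = np.tanh(x_grid @ Wh + bh)          # shared hidden activations / design matrix

def train_small_net(a, b):
    """Fit the output-layer weights of a tiny net to data from y = a*x + b."""
    y = a * x_grid[:, 0] + b
    v, *_ = np.linalg.lstsq(H, y, rcond=None)
    return v                            # these weights are the interpreter's input

# Training set for the interpretation model: network weights -> coefficients.
coefs = rng.uniform(-2, 2, size=(200, 2))
weights = np.stack([train_small_net(a, b) for a, b in coefs])

# Interpretation model: a linear least-squares map standing in for the network.
A, *_ = np.linalg.lstsq(weights, coefs, rcond=None)

# Interpret an unseen network trained on y = 1.5*x - 0.7: recover (a, b)
# directly from its weights, without access to the original training data.
v_new = train_small_net(1.5, -0.7)
a_hat, b_hat = v_new @ A
```

The key property this toy preserves is that the interpreter sees only the trained parameters, never the data the small network was trained on, matching the privacy-motivated setting described above.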