Search Results for author: Nina Schaaf

Found 3 papers, 1 paper with code

Mixture of Decision Trees for Interpretable Machine Learning

1 code implementation • 26 Nov 2022 • Simeon Brüggenjürgen, Nina Schaaf, Pascal Kerschke, Marco F. Huber

This work introduces a novel interpretable machine learning method called Mixture of Decision Trees (MoDT).
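The abstract describes a mixture of decision trees. As a hedged illustration of the general mixture-of-experts-with-trees idea (not the authors' MoDT algorithm), a gate can partition the input space and route each sample to one small tree, so every expert stays individually interpretable. The stumps, thresholds, and labels below are invented for the toy example:

```python
# Hypothetical sketch of the general mixture-of-trees idea, NOT the
# authors' MoDT implementation: a gate routes each sample to one small
# decision-tree "expert", and each expert alone remains interpretable.

def stump(feature, threshold, left_label, right_label):
    """A depth-1 decision tree: the simplest interpretable expert."""
    def predict(x):
        return left_label if x[feature] <= threshold else right_label
    return predict

def mixture_predict(x, gate, experts):
    """Route sample x to the expert chosen by the gate, then predict."""
    return experts[gate(x)](x)

# Toy 2-D example: the gate splits on x[0], the experts split on x[1].
experts = [stump(1, 0.5, "A", "B"), stump(1, 0.3, "C", "D")]
gate = lambda x: 0 if x[0] < 1.0 else 1

print(mixture_predict([0.2, 0.4], gate, experts))  # expert 0: "A"
print(mixture_predict([1.5, 0.4], gate, experts))  # expert 1: "D"
```

Because only one small tree fires per region, a prediction can be explained by the gate's region plus that tree's single split, which is the interpretability appeal of such mixtures.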

Interpretable Machine Learning

Towards Measuring Bias in Image Classification

no code implementations • 1 Jul 2021 • Nina Schaaf, Omar de Mitri, Hang Beom Kim, Alexander Windberger, Marco F. Huber

For this purpose, first an artificial dataset with a known bias is created and used to train intentionally biased CNNs.

Classification • Image Classification

Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

no code implementations • 10 Apr 2019 • Nina Schaaf, Marco F. Huber, Johannes Maucher

One obstacle that so far prevents the introduction of machine learning models primarily in critical areas is the lack of explainability.
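The title mentions L1-orthogonal regularization. As a hedged sketch of what such a penalty can look like in general (the authors' exact formulation may differ), one common form penalizes the L1 deviation of W^T W from the identity, pushing a layer's weight columns toward orthonormality:

```python
# Generic L1 orthogonality penalty ||W^T W - I||_1 on a weight matrix W,
# given as a plain nested-list sketch; this is an illustrative assumption,
# not necessarily the paper's exact regularizer.

def l1_orthogonality_penalty(W):
    """Sum of absolute deviations of W^T W from the identity matrix."""
    rows, cols = len(W), len(W[0])
    penalty = 0.0
    for i in range(cols):
        for j in range(cols):
            dot = sum(W[r][i] * W[r][j] for r in range(rows))
            target = 1.0 if i == j else 0.0
            penalty += abs(dot - target)
    return penalty

# An orthonormal matrix incurs zero penalty; a skewed one does not.
identity = [[1.0, 0.0], [0.0, 1.0]]
skewed = [[1.0, 1.0], [0.0, 1.0]]
print(l1_orthogonality_penalty(identity))  # 0.0
print(l1_orthogonality_penalty(skewed))    # 3.0
```

In training, a term like this would be added to the task loss with a weighting coefficient, trading accuracy against the structural constraint.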
