Search Results for author: Moritz Vandenhirtz

Found 3 papers, 2 papers with code

Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?

no code implementations · 24 Jan 2024 · Ričards Marcinkevičs, Sonia Laguna, Moritz Vandenhirtz, Julia E. Vogt

Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which first predict high-level concepts from the raw features and then predict the target variable from those predicted concepts.

Interpretable Machine Learning
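The two-step prediction the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's model: the linear predictors, random weights, and dimensions are all placeholder assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_features, n_concepts, n_classes = 10, 4, 3

# Step 1: a concept predictor maps raw features to concept logits
# (a single linear map here; weights are random stand-ins).
W_concept = rng.normal(size=(n_features, n_concepts))
# Step 2: a label predictor maps the predicted concepts to class logits.
W_label = rng.normal(size=(n_concepts, n_classes))

x = rng.normal(size=(8, n_features))      # a batch of raw features
concepts = sigmoid(x @ W_concept)         # interpretable bottleneck in [0, 1]
class_logits = concepts @ W_label         # target predicted from concepts only
```

Because the label predictor sees only the concept activations, a user can inspect, or intervene on, `concepts` before the final prediction, which is the bottleneck property the paper builds on.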

This Reads Like That: Deep Learning for Interpretable Natural Language Processing

1 code implementation · 25 Oct 2023 · Claudio Fanconi, Moritz Vandenhirtz, Severin Husmann, Julia E. Vogt

Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data.

Sentence Embeddings
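The classification rule the abstract mentions, assigning a new example the label of its most similar learned prototype, can be sketched like this. The embedding dimension, cosine similarity, and random prototypes are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 classes, one learned prototype per class in a
# 16-dimensional embedding space (random stand-ins for learned vectors).
prototypes = rng.normal(size=(3, 16))
proto_labels = np.array([0, 1, 2])

def classify(embedding, prototypes, proto_labels):
    """Assign the label of the most similar prototype (cosine similarity)."""
    sims = (prototypes @ embedding) / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(embedding)
    )
    return proto_labels[np.argmax(sims)], sims

label, sims = classify(rng.normal(size=16), prototypes, proto_labels)
```

The interpretability comes from the similarity scores themselves: each prediction can be explained by pointing at the prototype(s) it was closest to.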

Signal Is Harder To Learn Than Bias: Debiasing with Focal Loss

1 code implementation · 31 May 2023 · Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt

We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.

Decision Making · Domain Generalization · +1
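The focal loss that inspires the paper's reweighting scheme down-weights easy, well-classified examples so that training concentrates on hard ones. A minimal binary version is sketched below; it shows the standard focal loss, not the paper's full disentangling scheme.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-8):
    """Binary focal loss: -(1 - p_t)^gamma * log(p_t), where p_t is the
    predicted probability of the true class. Confident correct predictions
    are strongly down-weighted by the (1 - p_t)^gamma factor."""
    p_t = np.where(y == 1, p, 1.0 - p)  # probability of the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t + eps)

# An easy example (confident and correct) contributes far less loss than
# a hard one, which is the reweighting intuition the method builds on.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

With `gamma = 0` this reduces to the ordinary cross-entropy; larger `gamma` pushes the training signal further toward the examples the (biased) classifier finds hard.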
