no code implementations • 24 Jan 2024 • Ričards Marcinkevičs, Sonia Laguna, Moritz Vandenhirtz, Julia E. Vogt
Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which predict high-level concepts from the raw features in a first step and the target variable from the predicted concepts in a second.
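The two-step structure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear concept and label predictors, the weight names, and the random initialization are all assumptions made for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneckModel:
    """Minimal CBM sketch: raw features -> concepts -> target."""

    def __init__(self, n_features, n_concepts, rng=None):
        rng = rng or np.random.default_rng(0)
        # Hypothetical linear concept predictor g: features -> concept probabilities
        self.Wg = rng.normal(size=(n_features, n_concepts))
        # Hypothetical linear label predictor f: predicted concepts -> target probability
        self.Wf = rng.normal(size=(n_concepts, 1))

    def predict_concepts(self, x):
        # Step 1: predict high-level concepts from the raw features
        return sigmoid(x @ self.Wg)

    def predict(self, x):
        # Step 2: predict the target variable from the predicted concepts only
        c_hat = self.predict_concepts(x)
        y_hat = sigmoid(c_hat @ self.Wf)
        return y_hat, c_hat
```

Because the label predictor sees only the concept predictions, a user can inspect or intervene on `c_hat` before the final prediction, which is the interpretability argument behind the bottleneck design.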
1 code implementation • 31 May 2023 • Moritz Vandenhirtz, Laura Manduchi, Ričards Marcinkevičs, Julia E. Vogt
We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss.
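For context, the focal loss that inspires the reweighting scheme down-weights easy, well-classified examples so that training focuses on harder ones. Below is a standard binary focal loss in numpy; it is background for the cited inspiration, not the SiH reweighting scheme itself.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: -(1 - p_t)^gamma * log(p_t).

    p: predicted probability of class 1; y: true label in {0, 1}.
    With gamma = 0 this reduces to ordinary cross-entropy; larger gamma
    suppresses the loss of confident, correct predictions.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)  # probability assigned to the true class
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

The modulating factor `(1 - p_t)^gamma` is the part SiH's reweighting scheme takes inspiration from: easy samples (high `p_t`) contribute little, which in the debiasing setting helps separate bias-aligned from bias-conflicting examples.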
1 code implementation • 28 Feb 2023 • Ričards Marcinkevičs, Patricia Reis Wolfertstetter, Ugne Klimiene, Kieran Chin-Cheong, Alyssia Paschke, Julia Zerres, Markus Denzinger, David Niederberger, Sven Wellmann, Ece Ozkan, Christian Knorr, Julia E. Vogt
Appendicitis is among the most frequent reasons for pediatric abdominal surgeries.
no code implementations • 23 Dec 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets.
1 code implementation • 26 Jul 2022 • Ričards Marcinkevičs, Ece Ozkan, Julia E. Vogt
In addition, we compare several intra- and post-processing approaches applied to debiasing deep chest X-ray classifiers.
1 code implementation • ICLR 2022 • Laura Manduchi, Ričards Marcinkevičs, Michela C. Massi, Thomas Weikert, Alexander Sauter, Verena Gotta, Timothy Müller, Flavio Vasella, Marian C. Neidert, Marc Pfister, Bram Stieltjes, Julia E. Vogt
In this work, we study the problem of clustering survival data, a challenging and so far under-explored task.
1 code implementation • ICLR 2021 • Ričards Marcinkevičs, Julia E. Vogt
Exploratory analysis of time series data can yield a better understanding of complex dynamical systems.
no code implementations • 3 Dec 2020 • Ričards Marcinkevičs, Julia E. Vogt
In this review, we examine the problem of designing interpretable and explainable machine learning models.