no code implementations • 16 May 2024 • Milda Pocevičiūtė, Gabriel Eilertsen, Stina Garvin, Claes Lundström
Our contributions include showing that MIL for digital pathology is affected by clinically realistic differences in data, evaluating which features from a MIL model are most suitable for detecting changes in performance, and proposing an unsupervised metric named Fréchet Domain Distance (FDD) for quantification of domain shifts.
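Fréchet-style distances between feature distributions are typically computed by fitting a Gaussian to each set of embeddings and comparing the two Gaussians in closed form. The sketch below shows that generic computation; it is an illustration under the usual Gaussian assumption, not necessarily the exact FDD formulation from the paper, and the function name `frechet_distance` is ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two sets of embeddings.

    feats_a, feats_b: (n_samples, n_features) arrays of model features,
    e.g. from in-domain vs. shifted data. Larger values indicate a larger
    distribution shift in the model's feature space.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny spurious imaginary parts from numerical noise
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))
```

The same closed form underlies the widely used Fréchet Inception Distance; here the features would come from the MIL model itself rather than an Inception network.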
no code implementations • 17 Dec 2021 • Milda Pocevičiūtė, Gabriel Eilertsen, Sofia Jarkman, Claes Lundström
In this work we evaluate whether adding uncertainty estimates to DL predictions in digital pathology could add value in clinical applications, either by boosting overall predictive performance or by detecting mispredictions.
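A common way to obtain such uncertainty estimates is to aggregate predictions from several stochastic forward passes or ensemble members and use the predictive entropy as the uncertainty signal. The sketch below shows that generic recipe; it is an assumption-laden illustration, not the paper's specific method, and `ensemble_uncertainty` is a name we introduce here.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Aggregate class probabilities from an ensemble (or MC-dropout passes).

    member_probs: (n_members, n_samples, n_classes) predicted probabilities.
    Returns the mean prediction and its predictive entropy; high entropy
    flags samples whose predictions may be unreliable.
    """
    mean_p = member_probs.mean(axis=0)
    # small epsilon guards against log(0) for confident predictions
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    return mean_p, entropy
```

When the members agree, the averaged distribution stays peaked and entropy is low; disagreement flattens the average and raises the entropy, which is what makes it usable for misprediction detection.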
1 code implementation • 10 Dec 2021 • Karin Stacke, Jonas Unger, Claes Lundström, Gabriel Eilertsen
We bring forward a number of considerations, such as view generation for the contrastive objective and hyper-parameter tuning.
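View generation for a contrastive objective typically means producing two independently augmented "views" of the same image, which form the positive pair for the loss. The sketch below illustrates this with a random crop and horizontal flip; the augmentations, the `make_views` name, and the crop ratio are our assumptions for illustration, not the paper's evaluated configuration.

```python
import numpy as np

def make_views(img, rng, crop=0.8):
    """Generate two stochastically augmented views of one image.

    img: (H, W) or (H, W, C) array; crop: fraction of height/width to keep.
    The two views of the same image are the positive pair in a
    contrastive objective such as SimCLR's.
    """
    def one_view():
        h, w = img.shape[:2]
        ch, cw = int(h * crop), int(w * crop)
        y = rng.integers(0, h - ch + 1)
        x = rng.integers(0, w - cw + 1)
        view = img[y:y + ch, x:x + cw]
        if rng.random() < 0.5:
            view = view[:, ::-1]  # random horizontal flip
        return view
    return one_view(), one_view()
```

For pathology data, the choice of augmentations (e.g. whether to include stain/color perturbations) is itself a design decision, which is why view generation is worth evaluating explicitly.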
no code implementations • 23 Apr 2021 • Gabriel Eilertsen, Apostolia Tsirikoglou, Claes Lundström, Jonas Unger
This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data.
no code implementations • 16 Mar 2021 • Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
Machine learning (ML) algorithms are optimized for the distribution represented by the training data.
no code implementations • 14 Aug 2020 • Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
We present a survey on XAI within digital pathology, a medical imaging sub-discipline with particular characteristics and needs.
1 code implementation • 25 Sep 2019 • Karin Stacke, Gabriel Eilertsen, Jonas Unger, Claes Lundström
Most centrally, we present a novel measure for evaluating the distance between domains in the context of the learned representation of a particular model.
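One way to measure distance between domains in a model's learned representation is to compare per-channel activation distributions on the two domains and average a distributional distance across channels. The sketch below does this with the 1-D Wasserstein distance; it is a plausible illustration of the idea, not necessarily the paper's exact measure, and `representation_shift` is a name we introduce here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def representation_shift(acts_src, acts_tgt):
    """Average per-channel distribution distance in representation space.

    acts_src, acts_tgt: (n_samples, n_channels) activations from a chosen
    layer of the model on source vs. target data. Returns the mean 1-D
    Wasserstein distance across channels; larger means a bigger domain gap
    as seen by this particular model.
    """
    dists = [wasserstein_distance(acts_src[:, c], acts_tgt[:, c])
             for c in range(acts_src.shape[1])]
    return float(np.mean(dists))
```

Because the activations come from the model under study, the measure reflects the gap as that model experiences it, rather than a model-agnostic pixel-space difference.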