Search Results for author: Stefano Woerner

Found 4 papers, 3 papers with code

Navigating Data Scarcity using Foundation Models: A Benchmark of Few-Shot and Zero-Shot Learning Approaches in Medical Imaging

1 code implementation • 15 Aug 2024 • Stefano Woerner, Christian F. Baumgartner

Our results indicate that BiomedCLIP, a model pretrained exclusively on medical data, performs best on average for very small training set sizes, while very large CLIP models pretrained on LAION-2B perform best with slightly more training samples.

Few-Shot Learning • Medical Image Analysis • +1

Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals

1 code implementation • 8 Jun 2024 • Susu Sun, Stefano Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner

Attri-Net first counterfactually generates class-specific attribution maps to highlight the disease evidence, then performs classification with logistic regression classifiers based solely on the attribution maps (a rough sketch of this two-stage idea follows this entry).

Clinical Knowledge • Multi-Label Classification
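
The snippet below is a minimal, hypothetical PyTorch sketch of the two-stage idea described in the abstract above, not the authors' released implementation (that is available via the linked code). The `map_generator` stand-in, the class name, and all parameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class AttributionClassifier(nn.Module):
    """Sketch: per-class attribution maps, then one logistic-regression head per class."""

    def __init__(self, map_generator: nn.Module, num_classes: int, map_size: int):
        super().__init__()
        # Hypothetical stand-in for the counterfactual attribution-map generator.
        self.map_generator = map_generator
        # One independent logistic-regression classifier (linear layer + sigmoid) per class,
        # each seeing only that class's attribution map.
        self.heads = nn.ModuleList(
            [nn.Linear(map_size * map_size, 1) for _ in range(num_classes)]
        )

    def forward(self, x: torch.Tensor):
        # maps: (batch, num_classes, H, W) — one class-specific attribution map per label.
        maps = self.map_generator(x)
        logits = torch.cat(
            [head(maps[:, c].flatten(1)) for c, head in enumerate(self.heads)],
            dim=1,
        )
        # Multi-label probabilities plus the maps that serve as the explanations.
        return torch.sigmoid(logits), maps
```

Because each head only ever sees its own class's attribution map, the map itself is the evidence behind the prediction, which is the sense in which the model is inherently interpretable.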

A comprehensive and easy-to-use multi-domain multi-task medical imaging meta-dataset (MedIMeta)

no code implementations • 24 Apr 2024 • Stefano Woerner, Arthur Jaques, Christian F. Baumgartner

While the field of medical image analysis has undergone a transformative shift with the integration of machine learning techniques, a main challenge for these techniques is often the scarcity of large, diverse, and well-annotated datasets.

Cross-Domain Few-Shot Learning • Medical Image Analysis

Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals

2 code implementations • 1 Mar 2023 • Susu Sun, Stefano Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner

Furthermore, as we show in this paper, current explanation techniques do not perform adequately in the multi-label scenario, in which multiple medical findings may co-occur in a single image.

Classification • Clinical Knowledge • +2
