Search Results for author: Jakob Nikolas Kather

Found 34 papers, 20 papers with code

LLM Agents Making Agent Tools

no code implementations • 17 Feb 2025 • Georg Wölflein, Dyke Ferber, Daniel Truhn, Ognjen Arandjelović, Jakob Nikolas Kather

Tool use has turned large language models (LLMs) into powerful agents that can perform complex multi-step tasks by dynamically utilising external software components.
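As a minimal illustration of the tool-use pattern described above (illustrative names and schema only, not the paper's implementation): the model emits a structured tool call, and the agent dispatches it to a registered Python function.

```python
# Minimal sketch of LLM tool use: the model emits a JSON "tool call",
# the agent looks up the named function in a registry and executes it,
# feeding the result back to the model. All names here are illustrative.
import json

TOOLS = {}

def tool(fn):
    """Register a plain Python function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a, b):
    """Example tool: add two numbers."""
    return a + b

def dispatch(tool_call_json):
    """Execute one tool call of the form {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))  # → 5
```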

Abnormality-Driven Representation Learning for Radiology Imaging

no code implementations • 25 Nov 2024 • Marta Ligero, Tim Lenz, Georg Wölflein, Omar S. M. El Nahhas, Daniel Truhn, Jakob Nikolas Kather

To date, the most common approach for radiology deep learning pipelines is the use of end-to-end 3D networks based on models pre-trained on other tasks, followed by fine-tuning on the task at hand.


Medical Slice Transformer: Improved Diagnosis and Explainability on 3D Medical Images with DINOv2

1 code implementation • 24 Nov 2024 • Gustav Müller-Franzes, Firas Khader, Robert Siepmann, Tianyu Han, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn

We introduce the Medical Slice Transformer (MST) framework to adapt 2D self-supervised models for 3D medical image analysis.

Ranked #5 on Lung Nodule Classification on LIDC-IDRI (AUC metric, using extra training data)


Unsupervised Foundation Model-Agnostic Slide-Level Representation Learning

1 code implementation • 20 Nov 2024 • Tim Lenz, Peter Neidlinger, Marta Ligero, Georg Wölflein, Marko van Treeck, Jakob Nikolas Kather

Existing approaches for slide representation learning extend the principles of SSL from patch-level learning to entire slides by aligning different augmentations of the slide or by utilizing multimodal data.


On Instabilities of Unsupervised Denoising Diffusion Models in Magnetic Resonance Imaging Reconstruction

no code implementations • 23 Jun 2024 • Tianyu Han, Sven Nebelung, Firas Khader, Jakob Nikolas Kather, Daniel Truhn

Denoising diffusion models offer a promising approach to accelerating magnetic resonance imaging (MRI) and producing diagnostic-level images in an unsupervised manner.


Reducing self-supervised learning complexity improves weakly-supervised classification performance in computational pathology

no code implementations • 7 Mar 2024 • Tim Lenz, Omar S. M. El Nahhas, Marta Ligero, Jakob Nikolas Kather

Specifically, we analyzed the effects of adaptations in data volume, architecture, and algorithms on downstream classification tasks, emphasizing their impact on computational resources.


Unconditional Latent Diffusion Models Memorize Patient Imaging Data: Implications for Openly Sharing Synthetic Data

1 code implementation • 1 Feb 2024 • Salman Ul Hassan Dar, Marvin Seyfarth, Isabelle Ayx, Theano Papavassiliu, Stefan O. Schoenberg, Robert Malte Siepmann, Fabian Christopher Laqua, Jannik Kahmann, Norbert Frey, Bettina Baeßler, Sebastian Foersch, Daniel Truhn, Jakob Nikolas Kather, Sandy Engelhardt

Collectively, our results emphasize the importance of carefully training generative models on private medical imaging datasets, and examining the synthetic data to ensure patient privacy before sharing it for medical research and applications.


LongHealth: A Question Answering Benchmark with Long Clinical Documents

1 code implementation • 25 Jan 2024 • Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo JWL. Aerts, Jakob Nikolas Kather, Daniel Truhn, Keno Bressem

However, all models struggled significantly in tasks requiring the identification of missing information, highlighting a critical area for improvement in clinical data interpretation.


Large Language Models Streamline Automated Machine Learning for Clinical Studies

1 code implementation • 27 Aug 2023 • Soroosh Tayebi Arasteh, Tianyu Han, Mahshad Lotfinia, Christiane Kuhl, Jakob Nikolas Kather, Daniel Truhn, Sven Nebelung

A knowledge gap persists between machine learning (ML) developers (e.g., data scientists) and practitioners (e.g., clinicians), hampering the full utilization of ML for clinical data analysis.

Enhancing Network Initialization for Medical AI Models Using Large-Scale, Unlabeled Natural Images

2 code implementations • 15 Aug 2023 • Soroosh Tayebi Arasteh, Leo Misera, Jakob Nikolas Kather, Daniel Truhn, Sven Nebelung

In this study, we explored whether SSL pre-training on non-medical images can be applied to chest radiographs and how it compares to supervised pre-training on non-medical images and on medical images.


Cascaded Cross-Attention Networks for Data-Efficient Whole-Slide Image Classification Using Transformers

no code implementations • 11 May 2023 • Firas Khader, Jakob Nikolas Kather, Tianyu Han, Sven Nebelung, Christiane Kuhl, Johannes Stegmaier, Daniel Truhn

However, while the conventional transformer allows for simultaneous processing of a large set of input tokens, the computational demand scales quadratically with the number of input tokens and thus quadratically with the number of image patches.

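The quadratic scaling noted above follows from self-attention computing one score per (query, key) pair, i.e. an n × n matrix for n tokens. A minimal sketch (toy embeddings, not the paper's cascaded architecture):

```python
# Naive self-attention scores: for n tokens the score matrix has n * n entries,
# so memory and compute grow quadratically with the number of image patches.
# Dimensions below are illustrative toy values.
import numpy as np

def attention_scores(x):
    """Scaled dot-product attention score matrix for embeddings x of shape (n, d)."""
    d = x.shape[1]
    return (x @ x.T) / np.sqrt(d)  # shape (n, n): quadratic in token count

for n in (10, 100, 1000):  # whole-slide images can yield tens of thousands of patches
    x = np.zeros((n, 16))
    print(n, attention_scores(x).shape)  # 10x more tokens -> 100x larger score matrix
```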

Regression-based Deep-Learning predicts molecular biomarkers from pathology slides

1 code implementation • 11 Apr 2023 • Omar S. M. El Nahhas, Chiara M. L. Loeffler, Zunamys I. Carrero, Marko van Treeck, Fiona R. Kolbinger, Katherine J. Hewitt, Hannah S. Muti, Mara Graziani, Qinghe Zeng, Julien Calderaro, Nadina Ortiz-Brüchle, Tanwei Yuan, Michael Hoffmeister, Hermann Brenner, Alexander Brobeil, Jorge S. Reis-Filho, Jakob Nikolas Kather

We tested our method for multiple clinically and biologically relevant biomarkers: homologous repair deficiency (HRD) score, a clinically used pan-cancer biomarker, as well as markers of key biological processes in the tumor microenvironment.


Medical Diffusion: Denoising Diffusion Probabilistic Models for 3D Medical Image Generation

1 code implementation • 7 Nov 2022 • Firas Khader, Gustav Mueller-Franzes, Soroosh Tayebi Arasteh, Tianyu Han, Christoph Haarburger, Maximilian Schulze-Hagen, Philipp Schad, Sandy Engelhardt, Bettina Baessler, Sebastian Foersch, Johannes Stegmaier, Christiane Kuhl, Sven Nebelung, Jakob Nikolas Kather, Daniel Truhn

Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice score 0.91 vs. 0.95, without vs. with synthetic data).

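The 0.91 vs. 0.95 comparison in the segmentation result above refers to the Dice score; a minimal sketch of the standard definition for binary masks (not the paper's exact implementation):

```python
# Dice coefficient for binary segmentation masks: 2|A ∩ B| / (|A| + |B|).
# Values range from 0 (no overlap) to 1 (perfect overlap).
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice score between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping foreground pixels, 3 foreground in each mask.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, target), 3))  # → 0.667
```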
