no code implementations • 16 Jan 2024 • Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Melanie Boxberg, Peter Lienemann, Christian Matek, Sophia J. Wagner, Fabian J. Theis, Eldad Klaiman, Tingying Peng
Vision Transformers (ViTs) and Swin Transformers (Swin) are currently the state of the art in computational pathology.
2 code implementations • 23 Jan 2023 • Sophia J. Wagner, Daniel Reisenbüchler, Nicholas P. West, Jan Moritz Niehues, Gregory Patrick Veldhuizen, Philip Quirke, Heike I. Grabsch, Piet A. van den Brandt, Gordon G. A. Hutchins, Susan D. Richman, Tanwei Yuan, Rupert Langer, Josien Christina Anna Jenniskens, Kelly Offermans, Wolfram Mueller, Richard Gray, Stephen B. Gruber, Joel K. Greenson, Gad Rennert, Joseph D. Bonner, Daniel Schmolze, Jacqueline A. James, Maurice B. Loughrey, Manuel Salto-Tellez, Hermann Brenner, Michael Hoffmeister, Daniel Truhn, Julia A. Schnabel, Melanie Boxberg, Tingying Peng, Jakob Nikolas Kather
Methods: In this study, we developed a new fully transformer-based pipeline for end-to-end biomarker prediction from pathology slides.
1 code implementation • 13 May 2022 • Daniel Reisenbüchler, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
Classical multiple instance learning (MIL) methods often rest on the assumption that instances are independent and identically distributed (i.i.d.), and hence neglect the potentially rich contextual information beyond individual entities.
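To make the contrast concrete, here is a minimal sketch (not the authors' implementation; module names, embedding size, and the single attention layer are illustrative assumptions): an i.i.d.-style mean-pooling MIL head next to an aggregator in which patch embeddings exchange context through self-attention before bag-level classification.

```python
# Hypothetical sketch: i.i.d. mean pooling vs. context-aware MIL aggregation.
import torch
import torch.nn as nn

class MeanPoolMIL(nn.Module):
    """Treats instances as i.i.d.: each patch embedding is pooled independently."""
    def __init__(self, dim=384, n_classes=2):
        super().__init__()
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):              # x: (n_patches, dim) for one slide
        return self.head(x.mean(dim=0))

class ContextMIL(nn.Module):
    """Adds one self-attention layer so instances can share contextual information."""
    def __init__(self, dim=384, n_heads=4, n_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):              # x: (n_patches, dim)
        h, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        h = self.norm(h.squeeze(0) + x)      # residual connection over patches
        return self.head(h.mean(dim=0))

# Toy usage: 500 patch embeddings of dimension 384 from one slide.
patches = torch.randn(500, 384)
print(MeanPoolMIL()(patches).shape, ContextMIL()(patches).shape)
```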
1 code implementation • 14 Mar 2022 • Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
Evaluations of our framework on two public histopathological datasets show strong improvements in the case of sparse labels: for an H&E-stained colorectal cancer dataset, the accuracy increases by up to 9% compared to a supervised cross-entropy loss; for a highly imbalanced dataset of single white blood cells from leukemia patient blood smears, the F1-score increases by up to 6%.
1 code implementation • 26 Jul 2021 • Sophia J. Wagner, Nadieh Khalili, Raghav Sharma, Melanie Boxberg, Carsten Marr, Walter de Back, Tingying Peng
Alternatively, color augmentation can be applied during training, yielding a more robust model without the extra step of color normalization at test time.
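As a simple illustration of train-time color augmentation (a minimal HED-space jitter sketch, not the stain-augmentation model proposed in the paper; the function name and jitter ranges are assumptions), each stain channel of a patch is randomly scaled and shifted on the fly so that the model sees many plausible stain appearances during training.

```python
# Hypothetical sketch: HED-space color jitter applied during training,
# as an alternative to color normalization at test time.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hed_color_jitter(patch, sigma=0.05, bias=0.05, rng=None):
    """Randomly scale and shift hematoxylin/eosin/DAB channels of an 8-bit RGB patch."""
    rng = rng or np.random.default_rng()
    hed = rgb2hed(patch / 255.0)                          # RGB -> stain space
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)     # per-channel scale
    beta = rng.uniform(-bias, bias, size=3)               # per-channel shift
    rgb = hed2rgb(hed * alpha + beta)                      # back to RGB
    return (np.clip(rgb, 0, 1) * 255).astype(np.uint8)

# Usage inside a training data loader: augment each patch before feeding the model.
patch = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)
augmented = hed_color_jitter(patch)
```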