1 code implementation • 29 Nov 2024 • Tong Ding, Sophia J. Wagner, Andrew H. Song, Richard J. Chen, Ming Y. Lu, Andrew Zhang, Anurag J. Vaidya, Guillaume Jaume, Muhammad Shaban, Ahrong Kim, Drew F. K. Williamson, Bowen Chen, Cristina Almagro-Perez, Paul Doucet, Sharifa Sahai, Chengkuan Chen, Daisuke Komura, Akihiro Kawabe, Shumpei Ishikawa, Georg Gerber, Tingying Peng, Long Phi Le, Faisal Mahmood
The field of computational pathology has been transformed by recent advances in foundation models that encode histopathology regions of interest (ROIs) into versatile, transferable feature representations via self-supervised learning (SSL).
1 code implementation • 30 Sep 2024 • Oytun Demirbilek, Tingying Peng, Alaa Bessadok
To meet these challenges, we propose the Graph Residual Noise Learner Network (Grenol-Net), the first graph diffusion model for predicting a target graph from a source graph.
1 code implementation • 17 Jul 2024 • Tomáš Chobola, Yu Liu, Hanyi Zhang, Julia A. Schnabel, Tingying Peng
Current deep learning-based low-light image enhancement methods often struggle with high-resolution images and fail to meet the practical demands of visual perception across diverse and unseen scenarios.
1 code implementation • 7 Apr 2024 • Valentin Koch, Sophia J. Wagner, Salome Kazeminia, Ece Sancar, Matthias Hehr, Julia Schnabel, Tingying Peng, Carsten Marr
In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears.
no code implementations • 16 Jan 2024 • Manuel Tran, Amal Lahiani, Yashin Dicente Cid, Melanie Boxberg, Peter Lienemann, Christian Matek, Sophia J. Wagner, Fabian J. Theis, Eldad Klaiman, Tingying Peng
Vision Transformers (ViTs) and Swin Transformers (Swin) are currently state-of-the-art in computational pathology.
1 code implementation • 9 Jan 2024 • Benedikt Roth, Valentin Koch, Sophia J. Wagner, Julia A. Schnabel, Carsten Marr, Tingying Peng
Recently, foundation models in computer vision have shown that leveraging huge amounts of data through supervised or self-supervised learning improves feature quality and generalizability across a variety of tasks.
no code implementations • 21 Nov 2023 • Ziqi Yu, Botao Zhao, Shengjie Zhang, Xiang Chen, Jianfeng Feng, Tingying Peng, Xiao-Yong Zhang
To address these issues, we introduce hierarchical granularity discrimination, which exploits various levels of semantic information present in medical images.
1 code implementation • 3 Oct 2023 • Tomáš Chobola, Gesine Müller, Veit Dausmann, Anton Theileis, Jan Taucher, Jan Huisken, Tingying Peng
Our approach combines a pre-trained network to extract deep features from the input image with iterative Richardson-Lucy deconvolution steps.
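The classic iterative step referenced here can be sketched in a few lines. This is a minimal plain Richardson-Lucy loop, not the paper's learned pipeline; the PSF handling, iteration count, and initialization are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution (the classic iterative
    method; hyperparameters here are illustrative)."""
    # Start from a flat estimate with the same mean intensity.
    estimate = np.full_like(image, image.mean(), dtype=np.float64)
    psf_flipped = psf[::-1, ::-1]  # adjoint of the blur operator
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)  # multiplicative correction
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

Each iteration re-blurs the current estimate, compares it with the observation, and back-projects the correction, which is why the method pairs naturally with a learned feature extractor supplying a better prior.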
no code implementations • 5 Sep 2023 • Yu Liu, Gesine Müller, Nassir Navab, Carsten Marr, Jan Huisken, Tingying Peng
Light-sheet fluorescence microscopy (LSFM), a planar illumination technique that enables high-resolution imaging of samples, suffers from defocused images caused by light scattering when photons propagate through thick tissues.
1 code implementation • 16 Jul 2023 • Tomáš Chobola, Gesine Müller, Veit Dausmann, Anton Theileis, Jan Taucher, Jan Huisken, Tingying Peng
Our results demonstrate that LUCYD outperforms the state-of-the-art methods in both synthetic and real microscopy images, achieving superior performance in terms of image quality and generalisability.
no code implementations • NeurIPS 2023 • Manuel Tran, Yashin Dicente Cid, Amal Lahiani, Fabian J. Theis, Tingying Peng, Eldad Klaiman
We introduce LoReTTa (Linking mOdalities with a tRansitive and commutativE pre-Training sTrAtegy) to address this understudied problem.
2 code implementations • 23 Jan 2023 • Sophia J. Wagner, Daniel Reisenbüchler, Nicholas P. West, Jan Moritz Niehues, Gregory Patrick Veldhuizen, Philip Quirke, Heike I. Grabsch, Piet A. van den Brandt, Gordon G. A. Hutchins, Susan D. Richman, Tanwei Yuan, Rupert Langer, Josien Christina Anna Jenniskens, Kelly Offermans, Wolfram Mueller, Richard Gray, Stephen B. Gruber, Joel K. Greenson, Gad Rennert, Joseph D. Bonner, Daniel Schmolze, Jacqueline A. James, Maurice B. Loughrey, Manuel Salto-Tellez, Hermann Brenner, Michael Hoffmeister, Daniel Truhn, Julia A. Schnabel, Melanie Boxberg, Tingying Peng, Jakob Nikolas Kather
Methods: In this study, we developed a new fully transformer-based pipeline for end-to-end biomarker prediction from pathology slides.
no code implementations • 31 Dec 2022 • Florian Kofler, Johannes Wahle, Ivan Ezhov, Sophia Wagner, Rami Al-Maskari, Emilia Gryska, Mihail Todorov, Christina Bukas, Felix Meissen, Tingying Peng, Ali Ertürk, Daniel Rueckert, Rolf Heckemann, Jan Kirschke, Claus Zimmer, Benedikt Wiestler, Bjoern Menze, Marie Piraud
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with such annotations.
no code implementations • 4 Dec 2022 • Ziqi Yu, Xiaoyang Han, Shengjie Zhang, Jianfeng Feng, Tingying Peng, Xiao-Yong Zhang
Our results demonstrate that MouseGAN++, as a simultaneous image synthesis and segmentation method, can be used to fuse cross-modality information in an unpaired manner and yield more robust performance in the absence of multimodal data.
no code implementations • 30 Sep 2022 • Tomáš Chobola, Anton Theileis, Jan Taucher, Tingying Peng
We present a model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application.
no code implementations • 27 Jun 2022 • Yu Liu, Kurt Weiss, Nassir Navab, Carsten Marr, Jan Huisken, Tingying Peng
Light-sheet fluorescence microscopy (LSFM) is a cutting-edge volumetric imaging technique that allows for three-dimensional imaging of mesoscopic samples with decoupled illumination and detection paths.
1 code implementation • 13 May 2022 • Daniel Reisenbüchler, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
Classical multiple instance learning (MIL) methods are often based on the assumption that instances are independent and identically distributed (i.i.d.), hence neglecting the potentially rich contextual information beyond individual entities.
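As a toy illustration of the gap this work targets, a context-aware MIL pooling step can let instances in a bag exchange information before aggregation, instead of weighting them independently. All weights, shapes, and function names below are hypothetical stand-ins, not the paper's trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_mil_pool(instances, w_qk, w_attn):
    """Toy context-aware MIL pooling (illustrative weights only).
    instances: (n, d) bag of instance features."""
    # Context mixing: one self-attention round lets each instance
    # attend to the rest of its bag, breaking the i.i.d. treatment.
    scores = instances @ w_qk @ instances.T / np.sqrt(instances.shape[1])
    context = softmax(scores, axis=-1) @ instances      # (n, d)
    # Attention pooling: learned weights decide each instance's share.
    a = softmax(context @ w_attn, axis=0)               # (n, 1), sums to 1
    return (a * context).sum(axis=0)                    # (d,) bag embedding
```

Dropping the context-mixing step recovers the classical i.i.d. attention-MIL setting, where each instance's weight depends only on its own features.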
1 code implementation • 14 Mar 2022 • Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
Evaluations of our framework on two public histopathological datasets show strong improvements in the case of sparse labels: for a H&E-stained colorectal cancer dataset, the accuracy increases by up to 9% compared to supervised cross-entropy loss; for a highly imbalanced dataset of single white blood cells from leukemia patient blood smears, the F1-score increases by up to 6%.
no code implementations • 23 Nov 2021 • Ye Liu, Sophia J. Wagner, Tingying Peng
Annotating microscopy images for nuclei segmentation is laborious and time-consuming.
1 code implementation • 26 Jul 2021 • Sophia J. Wagner, Nadieh Khalili, Raghav Sharma, Melanie Boxberg, Carsten Marr, Walter de Back, Tingying Peng
Alternatively, color augmentation can be applied during training, leading to a more robust model without the extra step of color normalization at test time.
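A common instance of this technique class is stain color augmentation in HED space, using the Ruifrok-Johnston stain vectors to perturb hematoxylin, eosin, and DAB channels independently. This is a minimal sketch, not the paper's exact augmentation; the perturbation ranges are illustrative:

```python
import numpy as np

# Ruifrok-Johnston stain vectors (H, E, DAB rows in RGB optical density).
RGB_FROM_HED = np.array([[0.65, 0.70, 0.29],
                         [0.07, 0.99, 0.11],
                         [0.27, 0.57, 0.78]])
HED_FROM_RGB = np.linalg.inv(RGB_FROM_HED)

def hed_color_augment(rgb, sigma=0.03, bias=0.01, rng=None):
    """Stain color augmentation in HED space (illustrative ranges).
    rgb: float image in [0, 1], shape (h, w, 3)."""
    rng = np.random.default_rng() if rng is None else rng
    od = -np.log(np.clip(rgb, 1e-6, 1.0))          # RGB -> optical density
    hed = od @ HED_FROM_RGB                        # project onto stain basis
    alpha = rng.uniform(1 - sigma, 1 + sigma, 3)   # per-stain scale
    beta = rng.uniform(-bias, bias, 3)             # per-stain shift
    od_aug = (hed * alpha + beta) @ RGB_FROM_HED   # back to optical density
    return np.clip(np.exp(-od_aug), 0.0, 1.0)      # optical density -> RGB
```

Because the jitter is applied per stain channel rather than per RGB channel, the augmented images stay within the space of plausible staining variations.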
1 code implementation • 22 Jul 2020 • Ario Sadafi, Asya Makhro, Anna Bogdanova, Nassir Navab, Tingying Peng, Shadi Albarqouni, Carsten Marr
In blood cell disorders, only a subset of all cells is morphologically altered and relevant for the diagnosis.
no code implementations • 4 Jun 2017 • Gerda Bortsova, Gijs van Tulder, Florian Dubost, Tingying Peng, Nassir Navab, Aad van der Lugt, Daniel Bos, Marleen de Bruijne
In this paper, we propose a method for automatic segmentation of ICAC, the first such method to our knowledge.
no code implementations • 26 Aug 2016 • Gerda Bortsova, Michael Sterr, Lichao Wang, Fausto Milletari, Nassir Navab, Anika Böttcher, Heiko Lickert, Fabian Theis, Tingying Peng
A statistical analysis of these measurements requires annotation of mitosis events, which is currently a tedious and time-consuming task that has to be performed manually.
no code implementations • ICCV 2015 • Gustavo Carneiro, Tingying Peng, Christine Bayer, Nassir Navab
We introduce two new structured output models that use a latent graph, which is flexible in terms of the number of nodes and structure, where the training process minimises a high-order loss function using a weakly annotated training set.