no code implementations • 24 Apr 2024 • Philip Müller, Georgios Kaissis, Daniel Rueckert
Report generation models offer fine-grained textual interpretations of medical images like chest X-rays, yet they often lack interactivity (i.e., the ability to steer the generation process through user queries) and localized interpretability (i.e., visually grounding their predictions), which we deem essential for future adoption in clinical practice.
no code implementations • 12 Mar 2024 • Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Daniel Rueckert, Georgios Kaissis, Alexander Ziller
We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors and assess its implications on privacy leakage under DP-SGD.
no code implementations • 11 Mar 2024 • Alexander H. Berger, Laurin Lux, Suprosanna Shit, Ivan Ezhov, Georgios Kaissis, Martin J. Menten, Daniel Rueckert, Johannes C. Paetzold
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
no code implementations • 20 Feb 2024 • Alexander Ziller, Anneliese Riess, Kristian Schwethelm, Tamara T. Mueller, Daniel Rueckert, Georgios Kaissis
When training ML models with differential privacy (DP), formal upper bounds on the success of such reconstruction attacks can be provided.
1 code implementation • 19 Feb 2024 • Philip Müller, Felix Meissen, Georgios Kaissis, Daniel Rueckert
Weakly supervised object detection (WSup-OD) increases the usefulness and interpretability of image classification algorithms without requiring additional supervision.
no code implementations • 6 Dec 2023 • Felix Meissen, Johannes Getzner, Alexander Ziller, Georgios Kaissis, Daniel Rueckert
Additionally, we show that the prototypical in-distribution samples identified by our proposed methods translate well to different models and other datasets and that using their characteristics as guidance allows for successful manual selection of small subsets of high-performing samples.
no code implementations • 5 Dec 2023 • Alexander Ziller, Tamara T. Mueller, Simon Stieger, Leonhard Feiner, Johannes Brandt, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
Although a lower budget decreases the risk of information leakage, it typically also reduces the performance of such models.
no code implementations • 6 Nov 2023 • Dmitrii Usynin, Moritz Knolle, Georgios Kaissis
In this work, we unify a broad range of previous definitions and perspectives on memorisation in ML, discuss their interplay with model generalisation, and examine the implications of these phenomena for data privacy.
1 code implementation • 28 Sep 2023 • Leonhard F. Feiner, Martin J. Menten, Kerstin Hammernik, Paul Hager, Wenqi Huang, Daniel Rueckert, Rickmer F. Braren, Georgios Kaissis
In this paper, we propose a method to propagate uncertainty through cascades of deep learning models in medical imaging pipelines.
no code implementations • 25 Sep 2023 • Felix Meissen, Svenja Breuer, Moritz Knolle, Alena Buyx, Ruth Müller, Georgios Kaissis, Benedikt Wiestler, Daniel Rückert
The empirical fairness laws discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition.
no code implementations • 6 Sep 2023 • Vasiliki Sideri-Lampretsa, Veronika A. Zimmer, Huaqi Qiu, Georgios Kaissis, Daniel Rueckert
The success of multi-modal image registration, whether it is conventional or learning based, is predicated upon the choice of an appropriate distance (or similarity) measure.
1 code implementation • 5 Sep 2023 • Philip Müller, Felix Meissen, Johannes Brandt, Georgios Kaissis, Daniel Rueckert
Pathology detection and delineation enables the automatic interpretation of medical scans such as chest X-rays while providing a high level of explainability to support radiologists in making informed decisions.
no code implementations • 23 Aug 2023 • Moritz Knolle, Robert Dorfman, Alexander Ziller, Daniel Rueckert, Georgios Kaissis
Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets.
no code implementations • 11 Aug 2023 • Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mohammed Ammar, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, 
Martijn P A Starmans
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
no code implementations • 13 Jul 2023 • Tamara T. Mueller, Sophie Starck, Leonhard F. Feiner, Kyriaki-Margarita Bintsi, Daniel Rueckert, Georgios Kaissis
In this work, we introduce extended graph assessment metrics (GAMs) for regression tasks and continuous adjacency matrices.
1 code implementation • 13 Jul 2023 • Alexander Ziller, Ayhan Can Erdur, Marwa Trigui, Alp Güvenir, Tamara T. Mueller, Philip Müller, Friederike Jungmann, Johannes Brandt, Jan Peeken, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
Training Artificial Intelligence (AI) models on 3D images presents unique challenges compared to the 2D case: Firstly, the demand for computational resources is significantly higher, and secondly, the availability of large datasets for pre-training is often limited, impeding training success.
no code implementations • 13 Jul 2023 • Tamara T. Mueller, Maulik Chevli, Ameya Daigavane, Daniel Rueckert, Georgios Kaissis
Our findings highlight the potential and the challenges of this specific DP application area.
no code implementations • 13 Jul 2023 • Tamara T. Mueller, Siyu Zhou, Sophie Starck, Friederike Jungmann, Alexander Ziller, Orhun Aksoy, Danylo Movchan, Rickmer Braren, Georgios Kaissis, Daniel Rueckert
Body fat volume and distribution can be a strong indication for a person's overall health and the risk for developing diseases like type 2 diabetes and cardiovascular diseases.
no code implementations • 8 Jul 2023 • Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert
We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
1 code implementation • 10 Jun 2023 • Soroosh Tayebi Arasteh, Mahshad Lotfinia, Teresa Nolte, Marwin Saehn, Peter Isfort, Christiane Kuhl, Sven Nebelung, Georgios Kaissis, Daniel Truhn
We specifically investigate the performance of models trained with DP as compared to models trained without DP on data from institutions that the model had not seen during its training (i.e., external validation), the situation reflective of the clinical use of AI models.
no code implementations • 4 May 2023 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
Obtaining high-quality data for collaborative training of machine learning models can be a challenging task due to A) regulatory concerns and B) a lack of data owner incentives to participate.
1 code implementation • CVPR 2023 • Tim Tanida, Philip Müller, Georgios Kaissis, Daniel Rueckert
While previous methods generate reports without the possibility of human intervention and with limited explainability, our method opens up novel clinical use cases through additional interactive capabilities and introduces a high degree of transparency and explainability.
Ranked #1 on Medical Report Generation on MIMIC-CXR
1 code implementation • 3 Mar 2023 • Felix Meissen, Philip Müller, Georgios Kaissis, Daniel Rueckert
To tackle this problem, we propose Robust Detection Outcome (RoDeO); a novel metric for evaluating algorithms for pathology detection in medical images, especially in chest X-rays.
1 code implementation • 1 Mar 2023 • Ioannis Lagogiannis, Felix Meissen, Georgios Kaissis, Daniel Rueckert
Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set the new SOTA in a variety of modalities and datasets.
2 code implementations • 3 Feb 2023 • Soroosh Tayebi Arasteh, Alexander Ziller, Christiane Kuhl, Marcus Makowski, Sven Nebelung, Rickmer Braren, Daniel Rueckert, Daniel Truhn, Georgios Kaissis
In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.
1 code implementation • 30 Jan 2023 • Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis
We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy.
no code implementations • 18 Nov 2022 • Tamara T. Mueller, Stefan Kolek, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Daniel Rueckert, Georgios Kaissis
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database.
no code implementations • 14 Nov 2022 • Philip Müller, Georgios Kaissis, Daniel Rueckert
Image-text contrastive learning has proven effective for pretraining medical image models.
no code implementations • 8 Nov 2022 • Alexander Ziller, Ayhan Can Erdur, Friederike Jungmann, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
The prediction of pancreatic ductal adenocarcinoma therapy response is a clinically challenging and important task in this high-mortality tumour entity.
no code implementations • 24 Oct 2022 • Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra, Daniel Rueckert
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries, but such adversaries are rarely encountered in practice.
no code implementations • 11 Oct 2022 • Reihaneh Torkzadehmahani, Reza Nasirigerdeh, Daniel Rueckert, Georgios Kaissis
Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.
no code implementations • 30 Sep 2022 • Reza Nasirigerdeh, Javad Torkzadehmahani, Daniel Rueckert, Georgios Kaissis
They, on the other hand, considerably enhance the performance of shallow models in DP-FL and deeper models in FL and DP.
no code implementations • 9 Sep 2022 • Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis
Machine learning with formal privacy-preserving techniques like Differential Privacy (DP) allows one to derive valuable insights from sensitive medical imaging data while promising to protect patient privacy, but it usually comes at a sharp privacy-utility trade-off.
1 code implementation • 23 Aug 2022 • Felix Meissen, Johannes Paetzold, Georgios Kaissis, Daniel Rueckert
Most commonly, the anomaly detection model generates a "normal" version of an input image, and the pixel-wise $l^p$-difference of the two is used to localize anomalies.
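The residual-based localization described in this abstract can be sketched in a few lines of NumPy; `anomaly_map` and `localize` are hypothetical helper names for illustration, and the model producing the "normal" reconstruction is assumed to exist elsewhere:

```python
import numpy as np

def anomaly_map(image, reconstruction, p=2):
    # Pixel-wise l^p residual between the input and its "normal" reconstruction
    return np.abs(image - reconstruction) ** p

def localize(image, reconstruction, threshold, p=2):
    # Pixels whose residual exceeds the threshold are flagged as anomalous
    return anomaly_map(image, reconstruction, p) > threshold
```

Thresholding the residual map gives a binary anomaly mask; the choice of `p` and the threshold are tuning decisions the cited work examines critically.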
1 code implementation • 20 May 2022 • Reza Nasirigerdeh, Reihaneh Torkzadehmahani, Daniel Rueckert, Georgios Kaissis
Existing convolutional neural network architectures frequently rely upon batch normalization (BatchNorm) to effectively train the model.
1 code implementation • 9 May 2022 • Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios Kaissis
The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DPSGD, which requires clipping and noising of per-sample gradients.
Ranked #2 on Image Classification on Imagenette
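The clip-and-noise step that DPSGD performs, as described in the abstract above, can be sketched as follows (function and parameter names are illustrative, not the paper's; a minimal NumPy sketch, not a production implementation):

```python
import numpy as np

def dpsgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
               noise_mult=1.1, rng=None):
    rng = rng or np.random.default_rng(0)
    # Clip each per-sample gradient to an l2 norm of at most clip_norm
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    total = np.sum(clipped, axis=0)
    # Add Gaussian noise calibrated to the clipping bound (the sum's sensitivity)
    noisy = total + rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return params - lr * noisy / len(per_sample_grads)
```

Clipping bounds each individual's contribution to the update, which is what allows the Gaussian noise to yield a formal privacy guarantee.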
no code implementations • 5 May 2022 • Dmitrii Usynin, Helena Klause, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis
In federated learning for medical image analysis, the safety of the learning protocol is paramount.
1 code implementation • 19 Mar 2022 • Suprosanna Shit, Rajat Koner, Bastian Wittmann, Johannes Paetzold, Ivan Ezhov, Hongwei Li, Jiazhen Pan, Sahand Sharifzadeh, Georgios Kaissis, Volker Tresp, Bjoern Menze
We leverage direct set-based object prediction and incorporate the interaction among the objects to learn an object-relation representation jointly.
no code implementations • 17 Mar 2022 • Tamara T. Mueller, Dmitrii Usynin, Johannes C. Paetzold, Daniel Rueckert, Georgios Kaissis
In this work, we study the applications of differential privacy (DP) in the context of graph-structured data.
no code implementations • 1 Mar 2022 • Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, Georgios Kaissis
The training of neural networks with Differentially Private Stochastic Gradient Descent offers formal Differential Privacy guarantees but introduces accuracy trade-offs.
no code implementations • 1 Mar 2022 • Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
Collaborative machine learning settings like federated learning can be susceptible to adversarial interference and attacks.
no code implementations • 9 Feb 2022 • Vasiliki Sideri-Lampretsa, Georgios Kaissis, Daniel Rueckert
Diffeomorphic deformable multi-modal image registration is a challenging task which aims to bring images acquired by different modalities to the same coordinate space and at the same time to preserve the topology and the invertibility of the transformation.
1 code implementation • 8 Feb 2022 • Felix Meissen, Benedikt Wiestler, Georgios Kaissis, Daniel Rueckert
Many current state-of-the-art methods for anomaly localization in medical images rely on calculating a residual image between a potentially anomalous input image and its "healthy" reconstruction.
1 code implementation • 5 Feb 2022 • Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
In this work, we introduce differential privacy for graph-level classification, one of the key applications of machine learning on graphs.
1 code implementation • 24 Jan 2022 • Felix Meissen, Georgios Kaissis, Daniel Rueckert
In medical imaging, un-, semi-, or self-supervised pathology detection is often approached with anomaly- or out-of-distribution detection methods, whose inductive biases are not intentionally directed towards detecting pathologies, and are therefore sub-optimal for this task.
no code implementations • 21 Dec 2021 • Dmitrii Usynin, Alexander Ziller, Daniel Rueckert, Jonathan Passerat-Palmbach, Georgios Kaissis
The utilisation of large and diverse datasets for machine learning (ML) at scale is required to promote scientific insight into many meaningful problems.
1 code implementation • 6 Dec 2021 • Philip Müller, Georgios Kaissis, Congyu Zou, Daniel Rueckert
Contrastive learning has proven effective for pre-training image models on unlabeled data with promising results for tasks such as medical image classification.
no code implementations • 7 Oct 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kerstin Hammernik, Daniel Rueckert, Georgios Kaissis
We present $\zeta$-DP, an extension of differential privacy (DP) to complex-valued functions.
no code implementations • 22 Sep 2021 • Dmitrii Usynin, Alexander Ziller, Moritz Knolle, Andrew Trask, Kritika Prakash, Daniel Rueckert, Georgios Kaissis
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
no code implementations • 22 Sep 2021 • Georgios Kaissis, Moritz Knolle, Friederike Jungmann, Alexander Ziller, Dmitrii Usynin, Daniel Rueckert
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
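As an illustration of the two quantities the abstract names, here is a minimal Gaussian mechanism in which the query's l2-sensitivity and the noise standard deviation are the only privacy-relevant inputs. The helper `psi` defined as their ratio is an assumption made here for illustration; the paper's exact formulation of $\psi$ may differ:

```python
import numpy as np

def gaussian_mechanism(value, sigma, rng=None):
    # Release the query answer perturbed by Gaussian noise of std sigma
    rng = rng or np.random.default_rng(0)
    return value + rng.normal(0.0, sigma, size=np.shape(value))

def psi(l2_sensitivity, sigma):
    # Sensitivity-to-noise ratio; assumed here as the single characterising quantity
    return l2_sensitivity / sigma
```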
1 code implementation • 22 Sep 2021 • Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Friederike Jungmann, Daniel Rueckert, Georgios Kaissis
However, while techniques such as individual Rényi DP (RDP) allow for granular, per-person privacy accounting, few works have investigated the impact of each input feature on the individual's privacy loss.
1 code implementation • 13 Sep 2021 • Felix Meissen, Georgios Kaissis, Daniel Rueckert
In this work, we tackle the problem of Semi-Supervised Anomaly Segmentation (SAS) in Magnetic Resonance Images (MRI) of the brain, which is the task of automatically identifying pathologies in brain images.
1 code implementation • 30 Aug 2021 • Johannes C. Paetzold, Julian McGinnis, Suprosanna Shit, Ivan Ezhov, Paul Büschl, Chinmay Prabhakar, Mihail I. Todorov, Anjany Sekuboyina, Georgios Kaissis, Ali Ertürk, Stephan Günnemann, Bjoern H. Menze
Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset.
no code implementations • 30 Jul 2021 • Moritz Knolle, Dmitrii Usynin, Alexander Ziller, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual.
no code implementations • 9 Jul 2021 • Moritz Knolle, Alexander Ziller, Dmitrii Usynin, Rickmer Braren, Marcus R. Makowski, Daniel Rueckert, Georgios Kaissis
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
no code implementations • 9 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Moritz Knolle, Kritika Prakash, Andrew Trask, Rickmer Braren, Marcus Makowski, Daniel Rueckert, Georgios Kaissis
Reconciling large-scale ML with the closed-form reasoning required for the principled analysis of individual privacy loss requires the introduction of new tools for automatic sensitivity analysis and for tracking an individual's data and their features through the flow of computation.
1 code implementation • 6 Jul 2021 • Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle, Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
The application of PTs to FL in medical imaging, the trade-offs between privacy guarantees and model utility, the ramifications for training performance, and the susceptibility of the final models to attacks have not yet been conclusively investigated.
1 code implementation • 5 Jul 2021 • Benjamin Hou, Georgios Kaissis, Ronald Summers, Bernhard Kainz
Chest radiographs are one of the most common diagnostic modalities in clinical routine.
2 code implementations • 21 May 2021 • Reza Nasirigerdeh, Reihaneh Torkzadehmahani, Julian Matschinske, Jan Baumbach, Daniel Rueckert, Georgios Kaissis
Federated learning (FL) enables multiple clients to jointly train a global model under the coordination of a central server.
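The server coordination described above can be sketched as a single FedAvg-style aggregation round; `federated_averaging` is a generic sketch of the standard pattern, not this paper's specific method:

```python
import numpy as np

def federated_averaging(client_params, client_sizes):
    # Weighted average of client model parameters,
    # with weights proportional to each client's dataset size
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))
```

In each round the server broadcasts the averaged model back to the clients, who continue training locally on their private data.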
1 code implementation • 14 Jan 2021 • Teddy Koker, FatemehSadat Mireshghallah, Tom Titcombe, Georgios Kaissis
Deep Neural Networks (DNNs) are widely used for decision making in a myriad of critical applications, ranging from medical to societal and even judicial.
no code implementations • 10 Dec 2020 • Alexander Ziller, Jonathan Passerat-Palmbach, Théo Ryffel, Dmitrii Usynin, Andrew Trask, Ionésio Da Lima Costa Junior, Jason Mancuso, Marcus Makowski, Daniel Rueckert, Rickmer Braren, Georgios Kaissis
The utilisation of artificial intelligence in medicine and healthcare has led to successful clinical applications in several domains.
1 code implementation • 2 Sep 2020 • Moritz Knolle, Georgios Kaissis, Friederike Jungmann, Sebastian Ziegelmayer, Daniel Sasse, Marcus Makowski, Daniel Rueckert, Rickmer Braren
For artificial intelligence-based image analysis methods to reach clinical applicability, the development of high-performance algorithms is crucial.
6 code implementations • 13 Jan 2019 • Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivantik, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018.
1 code implementation • 20 Feb 2017 • Patrick Ferdinand Christ, Florian Ettlinger, Georgios Kaissis, Sebastian Schlecht, Freba Ahmaddy, Felix Grün, Alexander Valentinitsch, Seyed-Ahmad Ahmadi, Rickmer Braren, Bjoern Menze
A 3D neural network (SurvivalNet) then predicts the HCC lesions' malignancy from the HCC tumor segmentation.
4 code implementations • 20 Feb 2017 • Patrick Ferdinand Christ, Florian Ettlinger, Felix Grün, Mohamed Ezzeldin A. Elshaera, Jana Lipkova, Sebastian Schlecht, Freba Ahmaddy, Sunil Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler, Felix Hofmann, Melvin D Anastasi, Seyed-Ahmad Ahmadi, Georgios Kaissis, Julian Holch, Wieland Sommer, Rickmer Braren, Volker Heinemann, Bjoern Menze
In the first step, we train a FCN to segment the liver as ROI input for a second FCN.
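The two-step cascade can be sketched at inference time as follows; `liver_fcn` and `lesion_fcn` stand in for the two trained networks and are assumed to exist, and the thresholds are illustrative:

```python
import numpy as np

def cascaded_segmentation(volume, liver_fcn, lesion_fcn):
    # Step 1: coarse liver mask from the first FCN
    liver_mask = liver_fcn(volume) > 0.5
    # Step 2: restrict the second FCN to the liver ROI and predict lesions there
    roi = np.where(liver_mask, volume, 0.0)
    lesion_mask = (lesion_fcn(roi) > 0.5) & liver_mask
    return liver_mask, lesion_mask
```

Restricting the second network to the liver ROI is the design choice that lets it specialise on intra-hepatic appearance rather than the whole scan.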