Combining information from multi-view images is crucial to improve the performance and robustness of automated methods for disease diagnosis.
Multi-modality images have been widely used and provide comprehensive information for medical image analysis.
Cardiac tagging magnetic resonance imaging (t-MRI) is the gold standard for estimating regional myocardial deformation and cardiac strain.
As deep learning technologies advance, increasing amounts of data are needed to build general and robust models for various tasks.
Our proposed method tackles the challenge of training GANs in a federated learning setting: how to update the generator with a flow of temporary discriminators?
To alleviate such tedious manual effort, in this paper we propose a novel weakly supervised segmentation framework based on partial points annotation, i.e., only a small portion of the nuclei locations in each image is labeled.
However, effective and efficient delineation of all the knee articular cartilages in large, high-resolution 3D MR knee data remains an open challenge.
Motivated by Gestalt pattern theory and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's ability to infer simple (at least for humans) visual concepts, such as symmetry, from examples.