no code implementations • 3 Nov 2022 • An Zeng, Chunbiao Wu, Meiping Huang, Jian Zhuang, Shanshan Bi, Dan Pan, Najeeb Ullah, Kaleem Nawaz Khan, Tianchen Wang, Yiyu Shi, Xiaomeng Li, Guisen Lin, Xiaowei Xu
In this paper, we propose a large-scale dataset for coronary artery segmentation on CTA images.
Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan setting, machine condition, patients' characteristics, surrounding environment, etc.
At the data acquisition time, the operator could not know the quality of the segmentation results.
Results show that the baseline method can achieve comparable results with existing works on aorta and TL segmentation.
Experiment results on our clinical MCE data set demonstrate that the neural network trained with the proposed loss function outperforms those existing ones that try to obtain a unique ground truth from multiple annotations, both quantitatively and qualitatively.
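The excerpt contrasts training against all raters' annotations with first fusing them into a single ground truth. A minimal sketch of that idea, assuming a simple average of per-annotation soft-Dice losses (the paper's actual loss is not specified here, so `multi_annotation_loss` is a hypothetical simplification):

```python
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    # Soft Dice coefficient between a probability map and one binary mask.
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multi_annotation_loss(pred, annotations):
    # Average the Dice loss over every rater's mask directly,
    # instead of fusing the raters into one ground truth first.
    losses = [1.0 - soft_dice(pred, a) for a in annotations]
    return float(np.mean(losses))

pred = np.array([[0.9, 0.1],
                 [0.8, 0.2]])                 # predicted probabilities
raters = [np.array([[1, 0], [1, 0]]),         # annotation from rater 1
          np.array([[1, 0], [1, 1]])]         # annotation from rater 2
loss = multi_annotation_loss(pred, raters)
```

Because each rater contributes a separate loss term, disagreement between raters shows up as an irreducible floor in the loss rather than being hidden by a majority-vote mask.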
The success of deep learning heavily depends on the availability of large labeled training sets.
For PFO diagnosis, contrast transthoracic echocardiography (cTTE) is preferred, as it is more robust than alternative methods.

Deep neural networks (DNNs) have demonstrated their great potential in recent years, exceeding the performance of human experts in a wide range of applications.
no code implementations • 25 Apr 2021 • Xiaowei Xu, Jiawei Zhang, Jinglan Liu, Yukun Ding, Tianchen Wang, Hailong Qiu, Haiyun Yuan, Jian Zhuang, Wen Xie, Yuhao Dong, Qianjun Jia, Meiping Huang, Yiyu Shi
As one of the most commonly ordered imaging tests, computed tomography (CT) scan comes with inevitable radiation exposure that increases the cancer risk to patients.
To demonstrate this, we further present a baseline framework for the automatic classification of CHD, based on a state-of-the-art CHD segmentation method.
Coronary artery disease (CAD) is the most common cause of death globally, and its diagnosis is usually based on manual myocardial segmentation of Magnetic Resonance Imaging (MRI) sequences.
Deep learning has already demonstrated its power in medical imaging, including denoising, classification, and segmentation.
Experimental results on the ACDC MICCAI 2017 dataset demonstrate that our hardware-aware multi-scale NAS framework can reduce the latency by up to 3.5 times and satisfy the real-time constraints, while still achieving competitive segmentation accuracy, compared with the state-of-the-art NAS segmentation framework.
Real-time cine magnetic resonance imaging (MRI) plays an increasingly important role in various cardiac interventions.
For example, if the noise is large leading to significant difference between domain $X$ and domain $Y$, can we bridge $X$ and $Y$ with an intermediate domain $Z$ such that both the denoising process between $X$ and $Z$ and that between $Z$ and $Y$ are easier to learn?
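The question above asks whether a hard denoising mapping $X \to Y$ can be split into two easier hops through an intermediate domain $Z$. A toy numerical illustration of that decomposition, with simple moving-average smoothing standing in for the two learned mappings (the filters and window sizes here are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))   # domain Y: clean signal
noisy = clean + rng.normal(0.0, 0.5, 100)        # domain X: heavily corrupted

def denoise_step(x, window):
    # One "easy" denoising stage: a simple moving-average filter,
    # standing in for a learned X->Z or Z->Y mapping.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Direct X -> Y in one step, vs. two easier hops X -> Z -> Y,
# where Z is a partially denoised intermediate domain.
direct = denoise_step(noisy, 9)
via_z = denoise_step(denoise_step(noisy, 3), 7)
```

Both routes reduce the error relative to the noisy input; the point of the sketch is only that the composed path `X -> Z -> Y` is a valid alternative factorization of the same denoising problem.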
Existing selective segmentation methods, however, ignore this unique property of selective segmentation and train their DNN models by optimizing accuracy on the entire dataset.
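The distinction drawn here is between optimizing accuracy on the entire dataset and optimizing it only on the subset the model chooses to segment. A small synthetic sketch of that selective setting, assuming a hypothetical per-image confidence score and a fixed coverage budget:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
confidence = rng.uniform(0.0, 1.0, n)            # hypothetical per-image confidence
# Synthetic outcome: a prediction is more likely correct when confidence is high.
correct = rng.uniform(0.0, 1.0, n) < confidence

def selective_accuracy(correct, confidence, coverage):
    # Accuracy measured only on the `coverage` fraction of images the
    # model is most confident about -- the quantity selective
    # segmentation cares about, unlike whole-dataset accuracy.
    k = int(coverage * len(correct))
    idx = np.argsort(-confidence)[:k]
    return correct[idx].mean()

acc_all = correct.mean()                          # whole-dataset accuracy
acc_sel = selective_accuracy(correct, confidence, coverage=0.5)
```

Under this synthetic model, accuracy on the selected half exceeds whole-dataset accuracy, which is why training objectives that only optimize the latter leave the selective setting underserved.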
Cardiac magnetic resonance imaging (MRI) is an essential tool for MRI-guided surgery and real-time intervention.
3D printing has been widely adopted for clinical decision making and interventional planning of congenital heart disease (CHD), while whole-heart and great-vessel segmentation is the most significant but time-consuming step in model generation for 3D printing.
In this paper, we use deep-learning-based medical image segmentation as a vehicle to demonstrate that, interestingly, machines and humans view compression quality differently.