Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task.
To this end, we present a dual-domain self-supervised transformer (DSFormer) for accelerated MC-MRI reconstruction.
However, visual monitoring of fetal motion based on displayed slices and navigation at the level of stacks of slices are inefficient.
Our results show that in registration applications amenable to learning, the proposed deep learning methods with geodesic loss minimization can achieve accurate results with a wide capture range in real time (<100 ms).
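For rigid registration, the geodesic loss is commonly taken as the geodesic distance on the rotation group SO(3). The sketch below is a minimal illustrative implementation of that distance, assuming rotations are represented as 3x3 matrices; it is not necessarily the exact formulation used in the work described above.

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Geodesic distance (in radians) between two 3x3 rotation matrices on SO(3)."""
    # Relative rotation from R1 to R2; for exact rotations its trace lies in [-1, 3].
    M = R1.T @ R2
    cos_theta = (np.trace(M) - 1.0) / 2.0
    # Clip to guard against numerical drift slightly outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Identity vs. a 90-degree rotation about the z-axis: distance is pi/2.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
print(geodesic_distance(np.eye(3), Rz90))  # → ~1.5708
```

Minimizing this angle (rather than, say, an elementwise matrix difference) penalizes rotation errors by their true angular magnitude, which is one reason geodesic losses are favored for pose regression.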
We aimed to develop a fully automatic method that independently segments sections of the fetal brain in 2D MRI slices in real time.
One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation where the number of lesion voxels is often much lower than the number of non-lesion voxels.
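One common mitigation for this imbalance (not necessarily the one adopted in the work above) is to weight each class in the loss inversely to its voxel frequency. A minimal sketch, assuming integer label volumes:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-class loss weights inversely proportional to voxel counts.

    Rare classes (e.g., lesion voxels) receive large weights so they
    contribute comparably to abundant background voxels.
    """
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)  # avoid division by zero for absent classes
    # Normalized so that a perfectly balanced dataset yields weight 1.0 per class.
    return counts.sum() / (num_classes * counts)

# Toy volume: 990 background voxels, 10 lesion voxels (1% lesion).
labels = np.zeros(1000, dtype=int)
labels[:10] = 1
weights = inverse_frequency_weights(labels, 2)
print(weights)  # → [~0.505, 50.0]: the lesion class is upweighted ~100x
```

These weights can then multiply the per-class terms of a cross-entropy or Dice-style loss; alternatives such as oversampling lesion patches or asymmetric losses address the same issue.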
Brain extraction, or whole-brain segmentation, is an important first step in many neuroimage analysis pipelines.