Unsupervised Multi-Modality Registration Network based on Spatially Encoded Gradient Information

16 May 2021 · Wangbin Ding, Lei Li, Xiahai Zhuang, Liqin Huang

Multi-modality medical images can provide relevant and complementary information about a target (organ, tumor, or tissue). Registering multi-modality images to a common space can fuse this complementary information and facilitate clinical applications. Recently, neural networks have been widely investigated as a way to boost registration methods. However, developing a multi-modality registration network remains challenging due to the lack of robust similarity criteria for network training. In this work, we propose a multi-modality registration network (MMRegNet) that performs registration between multi-modality images. Meanwhile, we present spatially encoded gradient information to train MMRegNet in an unsupervised manner. The proposed network was evaluated on the MM-WHS 2017 dataset, where it achieves promising performance on left-ventricle cardiac registration tasks. To demonstrate its versatility, we further evaluated MMRegNet on a liver dataset from CHAOS 2019. Source code will be released publicly (https://github.com/NanYoMy/mmregnet) once the manuscript is accepted.
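The abstract's key idea is that gradient (edge) information can serve as a modality-independent training signal, since intensity-based similarity measures break down across modalities. The paper's exact spatially encoded formulation is not reproduced here; the following is only a minimal illustrative sketch in the spirit of normalized-gradient-field similarity, assuming 2D single-channel images and hypothetical function names of my own choosing.

```python
import numpy as np

def normalized_gradient_field(img, eps=1e-5):
    """Unit-length spatial gradients of a 2D image (eps stabilizes flat regions)."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / mag, gy / mag

def gradient_similarity(fixed, moving, eps=1e-5):
    """Mean squared inner product of the two gradient fields, in [0, 1].
    Invariant to monotonic intensity scaling, hence usable across modalities."""
    fx, fy = normalized_gradient_field(fixed, eps)
    mx, my = normalized_gradient_field(moving, eps)
    return np.mean((fx * mx + fy * my) ** 2)

# Same anatomy under a different intensity mapping still scores near 1:
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
fixed = np.exp(-(xx ** 2 + yy ** 2) * 4)   # synthetic "organ" blob
moving = 3.0 * fixed + 0.5                 # same structure, shifted/scaled intensities
print(gradient_similarity(fixed, moving))  # close to 1.0
```

Maximizing a criterion of this kind (or minimizing its negative as a loss) rewards aligned edge orientations rather than matched intensities, which is why gradient-based terms are attractive for unsupervised multi-modality training.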
