Communal Domain Learning for Registration in Drifted Image Spaces

20 Aug 2019  ·  Awais Mansoor, Marius George Linguraru

Designing a registration framework for images that do not share the same probability distribution is a major challenge in modern image analytics, yet a trivial task for the human visual system (HVS). Discrepancies in probability distributions, also known as \emph{drifts}, can occur for various reasons including, but not limited to, differences in sequences and modalities (e.g., MRI T1-T2 and MRI-CT registration) or acquisition settings (e.g., multisite, inter-subject, or intra-subject registrations). A popular assumption about the working of the HVS is that it exploits a communal feature subspace, existing between the registering images or fields-of-view, that encompasses key drift-invariant features. Mimicking the approach potentially adopted by the HVS, herein we present a technique for learning a representation of this invariant communal subspace shared by the registering domains. The proposed communal domain learning (CDL) framework uses a set of hierarchical nonlinear transforms to learn the communal subspace that minimizes the probability differences and maximizes the amount of shared information between the registering domains. Similarity metric and parameter optimization calculations for registration are subsequently performed in the drift-minimized learned communal subspace. This generic registration framework is applied to register multisequence (MR: T1, T2) and multimodal (MR, CT) images. Results demonstrated generic applicability, consistent performance, and statistically significant improvement for both multisequence and multimodal data using the proposed approach ($p$-value$<0.001$; Wilcoxon rank sum test) over baseline methods.
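To make the idea concrete, the sketch below shows one plausible instantiation of such a communal-subspace learner; it is not the authors' implementation. Two small encoder networks (hierarchical nonlinear transforms) map patches from the two domains into a shared latent space, an RBF-kernel MMD term stands in for the probability-difference penalty, and a cosine-similarity term on corresponding patches stands in for the shared-information objective. The encoder architectures, loss weights, and the use of paired patches are all illustrative assumptions.

```python
# Hedged sketch of a communal-subspace learner in the spirit of CDL.
# Assumptions (not from the paper): MLP encoders, RBF-kernel MMD as the
# distribution-discrepancy term, cosine similarity on paired patches as a
# proxy for shared information.
import torch
import torch.nn as nn


def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy with an RBF kernel (distribution mismatch)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


class Encoder(nn.Module):
    """Hierarchical nonlinear transform mapping patches to the communal subspace."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


def train_communal_subspace(patches_a, patches_b, in_dim, epochs=200, lam=1.0):
    """patches_a, patches_b: (N, in_dim) tensors of corresponding patches
    from the two domains (e.g., MR T1 and CT)."""
    enc_a, enc_b = Encoder(in_dim), Encoder(in_dim)
    opt = torch.optim.Adam(
        list(enc_a.parameters()) + list(enc_b.parameters()), lr=1e-3)
    for _ in range(epochs):
        za, zb = enc_a(patches_a), enc_b(patches_b)
        # Minimize the distributional drift between the two latent clouds ...
        drift = rbf_mmd(za, zb)
        # ... while keeping corresponding patches close in the subspace.
        shared = nn.functional.cosine_similarity(za, zb).mean()
        loss = drift - lam * shared
        opt.zero_grad()
        loss.backward()
        opt.step()
    return enc_a, enc_b


def similarity(enc_a, enc_b, patch_a, patch_b):
    """Registration similarity evaluated in the learned communal subspace."""
    with torch.no_grad():
        return -torch.norm(enc_a(patch_a) - enc_b(patch_b)).item()
```

Once the encoders are trained, the `similarity` score (or any metric computed on the latent features) can drive a standard registration optimizer, since both domains have been mapped into the same drift-minimized space.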
