To develop a deep-learning-based segmentation model for a new image dataset (e.g., one with a different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal ad hoc adaptation or augmentation approaches.
In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images.
We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest.
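The parametric objective behind this formulation can be made concrete: a similarity term on the warped moving image plus a smoothness penalty on the displacement field. Below is a minimal NumPy sketch; the function names (`warp_bilinear`, `registration_loss`), the MSE similarity, and the first-difference smoothness penalty are illustrative choices standing in for whatever loss a particular method uses, not the papers' exact formulation.

```python
import numpy as np

def warp_bilinear(img, disp):
    """Warp a 2D image by a displacement field of shape (2, H, W)
    using bilinear sampling (coordinates clipped at the border)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    y = np.clip(ys + disp[0], 0, h - 1)
    x = np.clip(xs + disp[1], 0, w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def registration_loss(moving, fixed, disp, lam=0.1):
    """Dissimilarity of the warped moving image to the fixed image,
    plus a smoothness (first-difference) penalty on the field."""
    warped = warp_bilinear(moving, disp)
    sim = np.mean((warped - fixed) ** 2)        # MSE similarity term
    dy = np.diff(disp, axis=1) ** 2             # finite differences of u
    dx = np.diff(disp, axis=2) ** 2
    smooth = dy.mean() + dx.mean()
    return sim + lam * smooth
```

In the learning-based setting, a network predicts `disp` from the image pair and this loss is averaged over the whole image collection rather than minimized per pair.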
The paper adapts the large deformation diffeomorphic metric mapping framework for image registration to the indirect setting where a template is registered against a target that is given through indirect noisy observations.
With the "Autograd Image Registration Laboratory" (AirLab), we introduce an open laboratory for image registration tasks, in which the analytic gradients of the objective function are computed automatically and the device on which the computations run, CPU or GPU, is transparent to the user.
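At its core, this treats pairwise registration as minimization of an image-matching objective over transform parameters. AirLab itself parameterizes the transform in PyTorch and lets autograd supply the analytic gradients; the sketch below is a deliberately simplified, dependency-free stand-in that searches integer translations exhaustively. The function name and MSE criterion are illustrative assumptions.

```python
import numpy as np

def register_translation(moving, fixed, max_shift=3):
    """Toy pairwise registration with a pure-translation model:
    exhaustively try integer shifts and keep the one that minimizes
    mean squared error. (A gradient-based toolkit would instead
    differentiate this objective w.r.t. continuous parameters.)"""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Swapping the exhaustive search for autograd-driven gradient descent on a richer transform (affine, B-spline, dense field) is exactly the step such libraries automate.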
A deep encoder-decoder network is used as the prediction model.
The individual course of white matter fiber tracts is key to analyzing white matter characteristics in healthy and diseased brains.
Ideally, the transformation that registers one image to another should be a diffeomorphism that is both invertible and smooth.
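Diffeomorphism in practice is commonly checked through the Jacobian determinant of the map phi(x) = x + u(x): the transformation folds wherever det(J_phi) <= 0, and an invertible, smooth map requires a positive determinant everywhere. A small NumPy sketch of such a check; the function name and the (2, H, W) field layout are illustrative assumptions.

```python
import numpy as np

def folding_fraction(disp):
    """Fraction of grid points where phi(x) = x + u(x) folds,
    i.e. where det(J_phi) <= 0, for a 2D displacement field
    `disp` of shape (2, H, W)."""
    # Jacobian of phi = I + grad(u), via central differences
    du_dy = np.gradient(disp, axis=1)
    du_dx = np.gradient(disp, axis=2)
    j11 = 1.0 + du_dy[0]   # d(phi_y)/dy
    j12 = du_dx[0]         # d(phi_y)/dx
    j21 = du_dy[1]         # d(phi_x)/dy
    j22 = 1.0 + du_dx[1]   # d(phi_x)/dx
    det = j11 * j22 - j12 * j21
    return float(np.mean(det <= 0))
```

Counting such non-positive-determinant locations is how "folding" is typically quantified when comparing registration networks.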
We found that FAIM maintains both of its advantages over VoxelMorph, higher accuracy and fewer "folding" locations, across a range of hyper-parameters (with the same values used for both networks).
The goal is to learn a complex function that maps the appearance of input image pairs to parameters of a spatial transformation in order to align corresponding anatomical structures.