Improved Model-based Deep Learning using Monotone Operator Learning (MOL)

22 Nov 2021 · Aniket Pramanik, Mathews Jacob

Model-based deep learning (MoDL) algorithms that rely on unrolling are emerging as powerful tools for image recovery. In this work, we introduce a novel monotone operator learning framework to overcome some of the challenges associated with current unrolled frameworks, including high memory cost, lack of robustness guarantees under input perturbations, and low interpretability. Unlike current unrolled architectures that use a finite number of iterations, we use the deep equilibrium (DEQ) framework to iterate the algorithm to convergence and to evaluate the gradient of the convolutional neural network (CNN) blocks using Jacobian iterations. This approach significantly reduces the memory demand, facilitating the extension of MoDL algorithms to high-dimensional problems. We constrain the CNN to be a monotone operator, which allows us to introduce algorithms with guaranteed convergence and robustness properties. We demonstrate the utility of the proposed scheme in the context of parallel MRI.
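The following is a minimal PyTorch sketch of the two ingredients the abstract describes, not the authors' implementation: a contractive CNN block (here enforced crudely with spectral normalization and damping, standing in for the paper's monotone constraint) and a deep-equilibrium layer whose forward pass iterates to a fixed point with O(1) memory and whose backward pass solves the implicit gradient equation by Jacobian (fixed-point) iterations instead of storing unrolled iterates. All names and hyperparameters (`MonotoneResidualCNN`, `DEQFixedPoint`, `fwd_iters`, `bwd_iters`, layer sizes) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MonotoneResidualCNN(nn.Module):
    """Toy contractive operator (assumption: not the paper's exact construction).

    Spectral normalization bounds the Lipschitz constant of each convolution,
    and the 0.5 damping keeps z -> x + 0.5 * net(z) contractive in z, so a
    unique fixed point exists and the forward iteration converges."""

    def __init__(self, channels: int = 1):
        super().__init__()
        sn = nn.utils.spectral_norm
        self.net = nn.Sequential(
            sn(nn.Conv2d(channels, 32, 3, padding=1)),
            nn.ReLU(),
            sn(nn.Conv2d(32, channels, 3, padding=1)),
        )

    def forward(self, z, x):
        return x + 0.5 * self.net(z)


class DEQFixedPoint(nn.Module):
    """Finds z* = f(z*, x) by forward iteration; backpropagates implicitly by
    solving g = v + J^T g with Jacobian iterations (J = df/dz at z*)."""

    def __init__(self, f, fwd_iters=50, bwd_iters=50, tol=1e-4):
        super().__init__()
        self.f = f
        self.fwd_iters, self.bwd_iters, self.tol = fwd_iters, bwd_iters, tol

    def forward(self, x):
        # Forward fixed-point iteration without an autograd graph:
        # memory cost is O(1) in the number of iterations.
        with torch.no_grad():
            z = torch.zeros_like(x)
            for _ in range(self.fwd_iters):
                z_next = self.f(z, x)
                if (z_next - z).norm() < self.tol * (z.norm() + 1e-8):
                    z = z_next
                    break
                z = z_next

        # One differentiable application of f at the equilibrium.
        z = self.f(z, x)

        # Separate graph used only for vector-Jacobian products J^T g.
        z0 = z.detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):
            g = grad
            for _ in range(self.bwd_iters):
                # Jacobian iteration for the implicit equation g = grad + J^T g.
                g = torch.autograd.grad(f0, z0, g, retain_graph=True)[0] + grad
            return g

        if z.requires_grad:
            z.register_hook(backward_hook)
        return z


# Usage sketch: gradients flow through the equilibrium without unrolling.
deq = DEQFixedPoint(MonotoneResidualCNN())
x = torch.randn(2, 1, 32, 32)
recon = deq(x)
recon.sum().backward()
```

Because the backward pass only needs vector-Jacobian products at the fixed point, the training memory does not grow with the number of forward iterations, which is the property that makes the DEQ formulation attractive for high-dimensional problems such as parallel MRI.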
