On the benefits of deep RL in accelerated MRI sampling

29 Sep 2021 · Thomas Sanchez, Igor Krawczuk, Volkan Cevher

Deep learning approaches have shown great promise in accelerating magnetic resonance imaging (MRI) by reconstructing high-quality images from highly undersampled data. While earlier sampling methods relied on heuristics, recent work has improved the state of the art (SotA) with deep reinforcement learning (RL) sampling policies, which promise long-term planning and adaptation to observations at test time. In this work, we perform a careful reproduction and comparison of SotA RL sampling methods. We i) find that a simple, easy-to-code, greedily trained fixed policy can match or outperform deep RL methods, and ii) identify and resolve subtle variations in preprocessing that previously made results incomparable across works. Our results cast doubt on the added value of current RL approaches over fixed masks in MRI sampling, and highlight the importance of strong fixed baselines, standardized reporting, and isolating the source of improvement in a given work via ablations. We conclude with recommendations, based on our findings, for training and evaluating deep reconstruction and sampling systems for adaptive MRI.
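To make the baseline concrete, here is a minimal sketch of what a "greedily trained fixed policy" for k-space line sampling can look like: starting from an empty mask, repeatedly add the phase-encode line that most reduces average reconstruction error on the training set. This is an illustrative toy (the function name `greedy_fixed_mask` and the zero-filled inverse-FFT reconstruction are our simplifying assumptions, not the paper's actual reconstructor, which would be a learned network):

```python
import numpy as np

def greedy_fixed_mask(train_images, budget):
    """Greedily build a fixed k-space row-sampling mask (illustrative sketch).

    At each step, add the row (phase-encode line) whose inclusion most
    reduces mean zero-filled reconstruction error on the training set.
    Assumes train_images has shape (num_images, height, width).
    """
    n_rows = train_images.shape[1]
    kspace = np.fft.fft2(train_images)  # per-image 2D FFT
    mask = np.zeros(n_rows, dtype=bool)

    def recon_error(m):
        # Zero-filled reconstruction: keep only sampled k-space rows,
        # inverse-transform, and measure mean squared error.
        sampled = kspace * m[None, :, None]
        recon = np.fft.ifft2(sampled)
        return np.mean(np.abs(recon - train_images) ** 2)

    for _ in range(budget):
        candidates = np.flatnonzero(~mask)
        errs = []
        for c in candidates:
            trial = mask.copy()
            trial[c] = True
            errs.append(recon_error(trial))
        mask[candidates[int(np.argmin(errs))]] = True
    return mask

# Toy usage: 4 random 16x16 "images", greedily select 4 of 16 rows.
imgs = np.random.default_rng(0).standard_normal((4, 16, 16))
mask = greedy_fixed_mask(imgs, budget=4)
```

The key point of the paper's comparison is that the resulting mask is fixed: unlike an RL policy, it does not adapt to the observations of a particular test subject, yet it remains competitive.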

