Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound

In ultrasound (US) imaging, various adaptive beamforming techniques have been investigated to improve the resolution and contrast-to-noise ratio of delay-and-sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and when the number of channels decreases. To address this problem, here we propose a deep learning-based beamformer that generates significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or sub-sampled radio-frequency (RF) data acquired at various subsampling rates and detector configurations, so that it can generate high-quality ultrasound images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods.
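To make the idea concrete, below is a minimal sketch (not the authors' architecture) of a learned beamformer: a small convolutional network that maps time-aligned RF channel data to a beamformed scan line, replacing the fixed channel summation of DAS with a learned, data-dependent combination. All shapes, layer sizes, and the zero-filling scheme for missing channels are illustrative assumptions.

```python
# Illustrative sketch of a deep-learning beamformer (assumed architecture,
# not the paper's network). PyTorch is used for brevity.
import torch
import torch.nn as nn

class LearnedBeamformer(nn.Module):
    def __init__(self, n_channels: int = 64, hidden: int = 128):
        super().__init__()
        # 1-D convolutions run along the depth (fast-time) axis; the channel
        # dimension holds the receive elements, so the network learns an
        # adaptive combination of channels instead of a fixed sum.
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),  # collapse to one beamformed line
        )

    def forward(self, rf: torch.Tensor) -> torch.Tensor:
        # rf: (batch, n_channels, n_depth) time-delayed RF data; missing
        # (subsampled) channels are zero-filled so a single network can handle
        # different subsampling patterns.
        return self.net(rf).squeeze(1)  # (batch, n_depth)

# Usage with random data standing in for delayed RF measurements.
model = LearnedBeamformer(n_channels=64)
rf = torch.randn(8, 64, 1024)               # 8 scan lines, 64 channels, 1024 depth samples
mask = (torch.rand(8, 64, 1) > 0.5).float() # simulate roughly 2x channel subsampling
out = model(rf * mask)
print(out.shape)                            # torch.Size([8, 1024])
```

A single network trained across many subsampling masks can, in this spirit, produce beamformed output for both fully sampled and subsampled acquisitions, which is the behavior the abstract describes.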
