Displaced Sensor Automotive Radar Imaging

6 Oct 2020  ·  Guohua Wang, Kumar Vijay Mishra ·

Displaced automotive sensor imaging exploits joint processing of the data acquired from multiple radar units, each of which may have limited individual resources, to enhance localization accuracy. Prior works either assume perfect synchronization among the sensors, employ single-antenna radars, entail high processing cost, or lack performance analyses. In contrast, we develop a displaced multiple-input multiple-output (MIMO) frequency-modulated continuous-wave (FMCW) radar signal model under coarse synchronization with only frame-level alignment. We derive Bayesian performance bounds for the common automotive radar processing modes: point-cloud-based fusion as well as raw-signal-based non-coherent and coherent imaging. For the non-coherent mode, which offers a compromise between low computational load and improved localization, we exploit the block sparsity of range profiles for signal reconstruction, thereby avoiding direct computational imaging on massive data. For high-resolution coherent imaging, we develop a method that automatically estimates the synchronization error and performs displaced radar imaging by exploiting sparsity-driven recovery models. Our extensive numerical experiments demonstrate these advantages. Our proposed non-coherent processing of displaced MIMO FMCW radars improves position estimation by an order of magnitude over conventional point-cloud fusion.
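To make the FMCW range-profile processing underlying the non-coherent mode concrete, the sketch below shows the standard dechirp-and-FFT step for a single FMCW radar unit: after mixing with the transmitted chirp, a point target appears as a beat tone whose frequency maps linearly to range. All numerical values (bandwidth, chirp duration, sample rate, target range) are illustrative assumptions, not parameters from the paper, and this sketch covers only conventional single-sensor processing, not the paper's block-sparse or displaced-sensor methods.

```python
import numpy as np

# Illustrative FMCW parameters (assumed, not from the paper)
c = 3e8            # speed of light (m/s)
B = 150e6          # sweep bandwidth (Hz)
T = 50e-6          # chirp duration (s)
fs = 2e6           # ADC sample rate of the dechirped signal (Hz)
slope = B / T      # chirp slope (Hz/s)

R_true = 45.0                   # assumed true target range (m)
tau = 2 * R_true / c            # round-trip delay (s)
f_beat = slope * tau            # beat frequency after dechirp (Hz)

# Noiseless dechirped (beat) signal for one chirp
n = np.arange(int(T * fs))
beat = np.exp(2j * np.pi * f_beat * n / fs)

# Range profile: FFT of the beat signal; the peak bin maps to range
N_fft = 4096
spec = np.abs(np.fft.fft(beat, N_fft))
k = np.argmax(spec[: N_fft // 2])
f_hat = k * fs / N_fft
R_hat = f_hat * c / (2 * slope)
print(round(R_hat, 1))  # close to R_true = 45.0
```

In the paper's non-coherent mode, profiles like `spec` from several displaced units would be combined, with the block-sparsity of targets across range bins exploited for reconstruction instead of an exhaustive imaging grid.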
