Human-vehicle Cooperative Visual Perception for Autonomous Driving under Complex Road and Traffic Scenarios

17 Dec 2021  ·  Yiyue Zhao, Cailin Lei, Yu Shen, Yuchuan Du, Qijun Chen

Human-vehicle cooperative driving has become a critical technology for autonomous driving, reducing the workload of human drivers. However, complex and uncertain road environments pose great challenges to the visual perception of cooperative systems, and the perception characteristics of autonomous driving differ substantially from those of manual driving. To enhance the visual perception capability of human-vehicle cooperative driving, this paper proposes a cooperative visual perception model. 506 images of complex road and traffic scenarios were collected as the data source, and the object detection algorithm of autonomous vehicles was improved, reaching a mean perception accuracy of 75.52% on traffic elements. Using an image fusion method, the gaze points of human drivers were fused with the vehicles' monitoring screens. Results reveal that cooperative visual perception can identify the riskiest zone and predict the trajectories of conflict objects more precisely. The findings can be applied to improving visual perception algorithms and providing accurate data for planning and control.
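The paper does not release code, so the following is only a minimal illustrative sketch of the fusion step described in the abstract. It assumes detections are already available from some detector as labeled bounding boxes, that driver gaze points arrive as normalized screen coordinates, and that the `fuse_gaze_with_detections` helper and its parameters are hypothetical names, not the authors' API:

```python
# Illustrative sketch only -- not the authors' released code.
# Assumes: detections as (label, score, x1, y1, x2, y2) boxes in pixels,
# and driver gaze points as normalized (u, v) screen coordinates.
import cv2
import numpy as np

def fuse_gaze_with_detections(frame, detections, gaze_points, radius=25):
    """Overlay detection boxes and driver gaze points on a monitoring frame.

    frame       : HxWx3 BGR image (the vehicle's monitoring screen).
    detections  : iterable of (label, score, x1, y1, x2, y2) in pixels.
    gaze_points : iterable of (u, v) in [0, 1], normalized gaze coordinates.
    """
    h, w = frame.shape[:2]
    out = frame.copy()

    # Draw the vehicle's machine perception: detected traffic elements.
    for label, score, x1, y1, x2, y2 in detections:
        cv2.rectangle(out, (int(x1), int(y1)), (int(x2), int(y2)),
                      (0, 255, 0), 2)
        cv2.putText(out, f"{label} {score:.2f}", (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    # Fuse the human channel: mark each gaze fixation as a translucent disc.
    overlay = out.copy()
    for u, v in gaze_points:
        cx, cy = int(u * w), int(v * h)
        cv2.circle(overlay, (cx, cy), radius, (0, 0, 255), -1)
    out = cv2.addWeighted(overlay, 0.4, out, 0.6, 0)

    # Flag detections under the driver's gaze: a simple proxy for the
    # "riskiest zone" described in the abstract.
    risky = [d for d in detections
             if any(d[2] <= u * w <= d[4] and d[3] <= v * h <= d[5]
                    for u, v in gaze_points)]
    return out, risky

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    dets = [("car", 0.91, 200, 180, 320, 260)]
    gaze = [(0.42, 0.46)]  # gaze point falling inside the car's box
    fused, risky = fuse_gaze_with_detections(frame, dets, gaze)
    print("detections under driver gaze:", risky)
```

Rendering the gaze layer as a translucent overlay keeps the underlying detections legible, and intersecting gaze points with boxes gives a cheap attention signal that downstream planning could weight, under the assumptions above.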
