Visual Sensor Pose Optimisation Using Visibility Models for Smart Cities

9 Jun 2021 · Eduardo Arnold, Sajjad Mozaffari, Mehrdad Dianati, Paul Jennings

Visual sensor networks are used for monitoring traffic in large cities and are expected to support automated driving in complex road segments. The pose of these sensors, i.e. their position and orientation, directly determines the coverage of the driving environment and the ability to detect and track objects navigating therein. Existing sensor pose optimisation methods either maximise the coverage of ground surfaces or treat the visibility of target objects (e.g. cars) as a binary variable, which fails to capture their degree of visibility. Such formulations break down in cluttered environments where multiple objects occlude each other. This paper proposes two novel sensor pose optimisation methods, one based on gradient ascent and one on integer programming, which maximise the visibility of multiple target objects. Both methods are built on a rendering engine that provides pixel-level visibility information about the target objects, and can therefore cope with occlusions in cluttered environments. The methods are evaluated in a complex driving environment and show improved visibility of target objects compared to existing methods. They can be used to guide the cost-effective deployment of sensor networks in smart cities and improve the safety and efficiency of traffic monitoring systems.
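To make the gradient-based variant concrete, the sketch below shows one way the idea could look in code. It is only an illustration, not the paper's implementation: the paper scores visibility with a rendering engine at pixel level, whereas here a hypothetical smooth `visibility` function (distance and field-of-view fall-off, no occlusion) stands in for that renderer, and the pose is refined by finite-difference gradient ascent. All names, target positions, and constants are invented for the example.

```python
import numpy as np

# Hypothetical 2D target positions on the ground plane (illustrative only).
targets = np.array([[10.0, 5.0], [12.0, -3.0], [20.0, 0.0]])

def visibility(pose, targets, fov=np.pi / 2):
    """Toy stand-in for the paper's pixel-level visibility score.

    pose = (x, y, yaw). Each target contributes a smooth score that decays
    with distance and with angular offset from the camera's optical axis,
    approximating a 'degree of visibility' without a renderer or occlusion.
    """
    x, y, yaw = pose
    rel = targets - np.array([x, y])
    dist = np.linalg.norm(rel, axis=1)
    ang = np.arctan2(rel[:, 1], rel[:, 0]) - yaw
    ang = (ang + np.pi) % (2 * np.pi) - np.pi       # wrap to [-pi, pi]
    angular = np.exp(-(ang / (fov / 2)) ** 2)       # falls off outside the FOV
    return float(np.sum(angular / (1.0 + 0.01 * dist ** 2)))

def gradient_ascent(pose, targets, steps=200, lr=0.1, eps=1e-4):
    """Maximise total visibility via finite-difference gradient ascent,
    keeping the best pose seen so that the result never regresses."""
    pose = np.asarray(pose, dtype=float)
    best, best_v = pose.copy(), visibility(pose, targets)
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            d = np.zeros_like(pose)
            d[i] = eps
            grad[i] = (visibility(pose + d, targets)
                       - visibility(pose - d, targets)) / (2 * eps)
        pose += lr * grad
        v = visibility(pose, targets)
        if v > best_v:
            best, best_v = pose.copy(), v
    return best

optimised = gradient_ascent(np.array([0.0, 0.0, 0.0]), targets)
```

In the paper the objective is instead evaluated by rendering the scene, so occluded pixels of one car do not count toward its visibility; the gradient there captures how small pose changes alter that rendered coverage, which this closed-form proxy cannot reproduce.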
