A Plausibility-based Fault Detection Method for High-level Fusion Perception Systems

30 Sep 2020 · Florian Geissler, Alex Unnervik, Michael Paulitsch

Trustworthy environment perception is the fundamental basis for the safe deployment of automated agents such as self-driving vehicles or intelligent robots. The problem remains that such trust is notoriously difficult to guarantee in the presence of systematic faults, e.g., non-traceable errors caused by machine learning functions. One way to tackle this issue without making overly specific assumptions about the perception process is plausibility checking: similar to the reasoning of human intuition, the final outcome of a complex black-box procedure is verified against given expectations of an object's behavior. In this article, we apply and evaluate collaborative, sensor-generic plausibility checking as a means of detecting empirical perception faults from their statistical fingerprints. Our real-world use case is next-generation automated driving that uses a roadside sensor infrastructure for perception augmentation, represented here by test scenarios on a German highway and at a city intersection. The plausibilization analysis integrates naturally into the object fusion process and helps to diagnose known, and possibly yet unknown, faults in distributed sensing systems.
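No code implementation is listed for this paper, but the core idea, verifying a fused object track against an expectation of the object's behavior and flagging statistical outliers, can be illustrated with a short sketch. The example below is not the authors' method: it assumes a constant-velocity motion model and a chi-square gate on the squared Mahalanobis distance of the position residual; all function names, thresholds, and covariances are hypothetical.

```python
import numpy as np

def predict_constant_velocity(state, dt):
    """Propagate a state [x, y, vx, vy] under a constant-velocity model."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ state

def plausibility_check(prev_state, new_state, dt, pos_cov, threshold=9.21):
    """Flag a track update as implausible if its position deviates from the
    motion-model expectation beyond a chi-square gate.
    threshold=9.21 is the 99% chi-square quantile for 2 degrees of freedom."""
    expected = predict_constant_velocity(prev_state, dt)
    residual = new_state[:2] - expected[:2]          # position residual
    d2 = residual @ np.linalg.inv(pos_cov) @ residual  # squared Mahalanobis distance
    return d2 <= threshold, d2

# Example: a fused track moving at 10 m/s in +x, updated after 0.1 s.
prev = np.array([0.0, 0.0, 10.0, 0.0])
good = np.array([1.0, 0.05, 10.0, 0.0])   # plausible: near the predicted position
bad  = np.array([11.0, 0.0, 10.0, 0.0])   # implausible: a 10 m "teleport"
pos_cov = np.diag([0.25, 0.25])           # assumed position covariance (m^2)

print(plausibility_check(prev, good, 0.1, pos_cov))  # (True, small distance)
print(plausibility_check(prev, bad, 0.1, pos_cov))   # (False, large distance)
```

In a fusion pipeline, such a gate would run per track per update, so that faulty sensor reports leave a statistical fingerprint (a raised rate of gate violations) rather than silently corrupting the fused state.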
