A Theoretical and Practical Framework for Evaluating Uncertainty Calibration in Object Detection

1 Sep 2023 · Pedro Conde, Rui L. Lopes, Cristiano Premebida

The proliferation of Deep Neural Networks has made machine learning systems increasingly present in real-world applications. Consequently, there is a growing demand for highly reliable models across many domains, making the problem of uncertainty calibration pivotal for the future of deep learning. This is especially true for object detection systems, which are commonly deployed in safety-critical applications such as autonomous driving, robotics, and medical diagnosis. For this reason, this work presents a novel theoretical and practical framework for evaluating object detection systems in the context of uncertainty calibration. It encompasses a new, comprehensive formulation of this concept through distinct formal definitions, together with three novel evaluation metrics derived from this theoretical foundation. The robustness of the proposed uncertainty calibration metrics is demonstrated through a series of representative experiments.
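The paper's three metrics are not detailed in this abstract. As background for what "uncertainty calibration" means in practice, the widely used Expected Calibration Error (ECE) can be sketched as follows; this is a generic classification-style baseline adapted to detections (a prediction counts as "correct" if it is a true positive at a chosen IoU threshold), not the authors' proposed metrics:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: the weighted average gap between
    confidence and empirical accuracy across confidence bins.

    confidences: predicted confidence scores in [0, 1]
    correct: 1 if the prediction was correct (e.g. a true-positive
             detection at a chosen IoU threshold), else 0
    n_bins: number of equal-width confidence bins
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Bin (lo, hi]; a confidence of exactly 0 goes to the first bin.
        idx = [i for i, c in enumerate(confidences)
               if (lo < c <= hi) or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        # Each bin contributes its |confidence - accuracy| gap,
        # weighted by the fraction of samples it contains.
        ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece
```

A perfectly calibrated detector (e.g. predictions at 90% confidence that are correct 90% of the time) yields an ECE of 0; larger values indicate over- or under-confidence.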
