9 papers with code • 3 benchmarks • 4 datasets
A Benchmark for the Robustness of Object Detection Models to Image Corruptions and Distortions
To allow fair comparison of robustness-enhancing methods, all models must use a standard ResNet-50 backbone, because performance scales strongly with backbone capacity. If requested, an unrestricted category can be added later.
Benchmark Homepage: https://github.com/bethgelab/robust-detection-benchmark
mPC [AP]: Mean Performance under Corruption [measured in AP]
rPC [%]: Relative Performance under Corruption [measured in %]
Test sets: COCO: val 2017; Pascal VOC: test 2007; Cityscapes: val
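The two benchmark metrics can be sketched in a few lines. This is a minimal illustration, not the benchmark's reference implementation: it assumes per-corruption AP scores are already available as a dict mapping each corruption type to a list of AP values, one per severity level, and `ap_clean` is the AP on the undistorted test set. All names and numbers below are illustrative.

```python
def mean_performance_under_corruption(ap_corrupted):
    """mPC: AP averaged over all corruption types and severity levels."""
    per_corruption = [sum(sev) / len(sev) for sev in ap_corrupted.values()]
    return sum(per_corruption) / len(per_corruption)

def relative_performance_under_corruption(mpc, ap_clean):
    """rPC: mPC expressed as a percentage of clean performance."""
    return 100.0 * mpc / ap_clean

# Illustrative AP scores (not real benchmark results).
ap_clean = 36.3
ap_corrupted = {
    "gaussian_noise": [30.1, 25.4, 20.2],
    "motion_blur": [28.7, 22.9, 17.5],
}
mpc = mean_performance_under_corruption(ap_corrupted)
rpc = relative_performance_under_corruption(mpc, ap_clean)
```

In words: mPC averages AP across every corruption/severity combination, and rPC normalizes that by clean-data AP, so rPC = 100% would mean no robustness gap at all.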
(Image credit: Benchmarking Robustness in Object Detection)
The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.
Ranked #1 on Robust Object Detection on COCO
Object detection from images captured by Unmanned Aerial Vehicles (UAVs) is becoming increasingly useful.
BIRANet yields 72.3/75.3% average AP/AR on the nuScenes dataset, which is better than the performance of our base network, Faster R-CNN with Feature Pyramid Network (FFPN).
Ranked #1 on Object Detection on nuScenes
Our approach is motivated by logistics, where this assumption is valid and refined planes can be used to perform robust object detection without the need for supervised learning.
To adapt to the domain shift, the model is trained on the target domain using a set of noisy object bounding boxes obtained from a detection model trained only on the source domain.
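The pseudo-labelling step described above can be sketched as follows. This is a hedged illustration under assumed interfaces: `source_model.predict` and the confidence threshold are hypothetical names, not the paper's actual API, and real pipelines typically add box filtering and iterative refinement on top of this.

```python
def generate_pseudo_labels(source_model, target_images, score_threshold=0.5):
    """Run a source-trained detector on target-domain images and keep only
    confident detections as (noisy) pseudo ground-truth boxes.

    `source_model.predict(image)` is assumed to return a list of
    (box, label, score) tuples; these names are illustrative.
    """
    pseudo_labels = []
    for image in target_images:
        detections = source_model.predict(image)
        # Low-confidence boxes are discarded to reduce label noise.
        confident = [(box, label) for box, label, score in detections
                     if score >= score_threshold]
        pseudo_labels.append(confident)
    return pseudo_labels
```

The target-domain model is then trained on these pseudo-labelled images as if they were ordinary annotations, accepting some label noise in exchange for in-domain training data.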