Locating Objects Without Bounding Boxes

Recent advances in convolutional neural networks (CNNs) have achieved remarkable results in locating objects in images. In these networks, the training procedure usually requires providing bounding boxes or the maximum number of expected objects. In this paper, we address the task of estimating object locations without annotated bounding boxes, which are typically hand-drawn and time-consuming to label. We propose a loss function that can be used in any fully convolutional network (FCN) to estimate object locations. This loss function is a modification of the average Hausdorff distance between two unordered sets of points. The proposed method has no notion of bounding boxes, region proposals, or sliding windows. We evaluate our method with three datasets designed to locate people's heads, pupil centers, and plant centers. We outperform state-of-the-art generic object detectors and methods fine-tuned for pupil tracking.
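To make the underlying metric concrete, below is a minimal NumPy sketch of the plain (unweighted) average Hausdorff distance between two point sets. Note this is only the base distance the paper starts from; the paper's actual loss further modifies it to be differentiable with respect to a network's output probability map, which is not reproduced here.

```python
import numpy as np

def average_hausdorff_distance(X, Y):
    """Average Hausdorff distance between two unordered 2-D point sets.

    Unlike the classic (max-based) Hausdorff distance, each direction
    averages the nearest-neighbor distances, which makes the metric
    less sensitive to a single outlier point.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    # Pairwise Euclidean distances: d[i, j] = ||x_i - y_j||
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # Mean distance from each x to its closest y, plus the reverse direction
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For identical point sets the distance is zero, and it grows smoothly as predicted locations drift away from the ground-truth locations, which is what makes an average-based variant more suitable as a training signal than the max-based Hausdorff distance.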

Published at CVPR 2019.


Results from the Paper

Task                 Dataset  Model           Metric     Value  Global Rank
Object Localization  Mall     Hausdorff Loss  Precision  88.1   #1
Object Localization  Plant    Hausdorff Loss  F-Score    88.6   #1
Object Localization  Pupil    Hausdorff Loss  Recall     89.2   #1
