Segmentation is All You Need

30 Apr 2019 · Zehua Cheng, Yuxiang Wu, Zhenghua Xu, Thomas Lukasiewicz, Weiyang Wang

Region proposal mechanisms are essential for existing deep learning approaches to object detection in images. Although they generally achieve good detection performance under normal circumstances, their recall in scenes with extreme cases is unacceptably low. This is mainly because bounding box annotations contain much environmental noise, and non-maximum suppression (NMS) is required to select target boxes. Therefore, in this paper, we propose the first anchor-free and NMS-free object detection model, called weakly supervised multimodal annotation segmentation (WSMA-Seg), which utilizes segmentation models to achieve accurate and robust object detection without NMS. In WSMA-Seg, multimodal annotations are proposed to achieve instance-aware segmentation using weakly supervised bounding boxes; we also develop a run-data-based following algorithm to trace the contours of objects. In addition, we propose multi-scale pooling segmentation (MSP-Seg) as the underlying segmentation model of WSMA-Seg to achieve a more accurate segmentation and to enhance the detection accuracy of WSMA-Seg. Experimental results on multiple datasets show that the proposed WSMA-Seg approach outperforms state-of-the-art detectors.
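The core idea of deriving detections from a segmentation mask without NMS can be illustrated with a minimal sketch. This is not the paper's run-data-based contour-following algorithm or its multimodal-annotation setup; it is a simple connected-component pass (a hypothetical stand-in) showing how each foreground region of a binary mask yields exactly one box, so no duplicate-box suppression step is needed.

```python
# Hedged sketch: a plain connected-component traversal, NOT the paper's
# run-data-based following algorithm. It only demonstrates the principle
# that one segmented region maps to one box, making NMS unnecessary.
from collections import deque

def boxes_from_mask(mask):
    """Return one (x_min, y_min, x_max, y_max) box per 4-connected
    foreground region of a binary mask given as a list of 0/1 rows."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS over the region while tracking its bounding extent.
                q = deque([(y, x)])
                seen[y][x] = True
                x0, y0, x1, y1 = x, y, x, y
                while q:
                    cy, cx = q.popleft()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

# Two separate foreground blobs -> two boxes, no suppression needed.
demo = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
print(boxes_from_mask(demo))  # [(0, 0, 1, 1), (3, 1, 4, 2)]
```

In the actual approach, the mask would come from the MSP-Seg model and the region boundaries would be traced via the run-data-based following algorithm rather than a BFS flood fill.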



Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Object Detection | COCO test-dev | WSMA-Seg | box mAP | 38.1 | # 211 |
| Object Detection | COCO test-dev | WSMA-Seg | Hardware Burden | None | # 1 |
| Object Detection | COCO test-dev | WSMA-Seg | Operations per network pass | None | # 1 |
| Head Detection | Rebar Head | WSMA-Seg (stack=2, base=40, depth=5) | F1 | 98.83% | # 1 |
| Face Detection | WIDER Face (Hard) | WSMA-Seg | AP | 0.8723 | # 16 |
| Face Detection | WIDER Face (Medium) | WSMA-Seg | AP | 0.9341 | # 18 |

