Deformable ConvNets v2: More Deformable, Better Results

CVPR 2019 · Xizhou Zhu, Han Hu, Stephen Lin, Jifeng Dai

The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.
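The modulation mechanism described above multiplies each sampled value by a learned scalar in [0, 1], in addition to shifting each sampling location by a learned offset. As a minimal sketch of the idea (not the paper's CUDA implementation), the following single-channel NumPy code computes a modulated deformable convolution; the function names and the per-location offset/mask layout are illustrative assumptions, and in the real model the offsets and masks are predicted by a separate convolutional branch rather than supplied directly.

```python
import numpy as np

def bilinear_sample(img, y, x):
    # Bilinearly interpolate img at fractional (y, x); out-of-bounds reads as zero.
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for yy, wy in ((y0, 1.0 - (y - y0)), (y0 + 1, y - y0)):
        for xx, wx in ((x0, 1.0 - (x - x0)), (x0 + 1, x - x0)):
            if 0 <= yy < H and 0 <= xx < W:
                val += wy * wx * img[yy, xx]
    return val

def modulated_deform_conv2d(img, weight, offsets, mask):
    """Single-channel modulated deformable convolution (DCNv2-style sketch).

    img:     (H, W) input feature map
    weight:  (k, k) convolution kernel
    offsets: (H, W, k*k, 2) learned (dy, dx) shift per sampling point
    mask:    (H, W, k*k) learned modulation scalars in [0, 1]
    """
    H, W = img.shape
    k = weight.shape[0]
    r = k // 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for idx in range(k * k):
                ky, kx = divmod(idx, k)
                dy, dx = offsets[i, j, idx]
                # Regular grid position plus learned offset, scaled by the mask.
                y = i + (ky - r) + dy
                x = j + (kx - r) + dx
                acc += weight[ky, kx] * mask[i, j, idx] * bilinear_sample(img, y, x)
            out[i, j] = acc
    return out
```

With zero offsets and an all-ones mask this reduces to an ordinary (zero-padded) convolution; a mask of zeros at a sampling point excludes that point entirely, which is how the network can suppress irrelevant image content outside the region of interest.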



Results from the Paper

Task              Dataset        Model                              Metric   Value   Global Rank
Object Detection  COCO minival   Mask R-CNN (ResNet-101, DCNv2)     box AP   43.1    #100
Object Detection  COCO minival   Faster R-CNN (ResNet-101, DCNv2)   box AP   41.7    #115
                                                                    APS      22.2    #69
                                                                    APM      45.8    #44
                                                                    APL      58.7    #31
Object Detection  COCO test-dev  DCNv2 (ResNet-101, multi-scale)    box AP   46.0    #99
                                                                    AP50     67.9    #54
                                                                    AP75     50.8    #68
                                                                    APS      27.8    #70
                                                                    APM      49.1    #69
                                                                    APL      59.5    #64