MixConv: Mixed Depthwise Convolutional Kernels

22 Jul 2019  ·  Mingxing Tan, Quoc V. Le ·

Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes multiple kernel sizes in a single convolution. As a simple drop-in replacement for vanilla depthwise convolution, MixConv improves the accuracy and efficiency of existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into an AutoML search space and develop a new family of models, named MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPS). Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet/mixnet
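The core idea is simple: split the input channels into groups and run a depthwise convolution with a different kernel size on each group, then concatenate the results. The sketch below is a minimal, illustrative NumPy version of that idea, not the paper's TensorFlow implementation; the function names, shapes, and the naive convolution loop are assumptions chosen for clarity over speed.

```python
import numpy as np

def depthwise_conv2d(x, k):
    """Naive depthwise 2D convolution with 'same' padding (odd kernels only).
    x: (C, H, W) feature map; k: (C, kh, kw), one filter per channel."""
    C, H, W = x.shape
    _, kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros((C, H, W))
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + kh, j:j + kw] * k[c])
    return out

def mixconv(x, kernel_sizes, weights):
    """MixConv sketch: partition channels into len(kernel_sizes) groups,
    apply a depthwise conv with a different kernel size per group,
    and concatenate the outputs along the channel axis."""
    groups = np.array_split(np.arange(x.shape[0]), len(kernel_sizes))
    outs = [depthwise_conv2d(x[g], weights[i]) for i, g in enumerate(groups)]
    return np.concatenate(outs, axis=0)

# Hypothetical usage: 8 channels split across kernel sizes 3, 5, 7.
x = np.random.rand(8, 16, 16)
kernel_sizes = [3, 5, 7]
groups = np.array_split(np.arange(8), len(kernel_sizes))  # sizes 3, 3, 2
weights = [np.random.rand(len(g), k, k) for g, k in zip(groups, kernel_sizes)]
y = mixconv(x, kernel_sizes, weights)  # same shape as x: (8, 16, 16)
```

Because each group is still depthwise, the parameter and FLOP cost stays close to a vanilla depthwise convolution while the larger kernels in some groups capture higher-resolution spatial patterns.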


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | ImageNet | MixNet-L | Top 1 Accuracy | 78.9% | #787 |
| Image Classification | ImageNet | MixNet-L | Number of params | 7.3M | #467 |
| Image Classification | ImageNet | MixNet-L | GFLOPs | 0.565 | #61 |
| Image Classification | ImageNet | MixNet-S | Top 1 Accuracy | 75.8% | #918 |
| Image Classification | ImageNet | MixNet-S | Number of params | 4.1M | #392 |
| Image Classification | ImageNet | MixNet-S | GFLOPs | 0.256 | #21 |
| Image Classification | ImageNet | MixNet-M | Top 1 Accuracy | 77.0% | #875 |
| Image Classification | ImageNet | MixNet-M | Number of params | 5.0M | #413 |
| Image Classification | ImageNet | MixNet-M | GFLOPs | 0.360 | #37 |
