DiCENet: Dimension-wise Convolutions for Efficient Networks

8 Jun 2019  ·  Sachin Mehta, Hannaneh Hajishirzi, Mohammad Rastegari ·

We introduce a novel and generic convolutional unit, the DiCE unit, built using dimension-wise convolutions and dimension-wise fusion. The dimension-wise convolutions apply light-weight convolutional filtering across each dimension of the input tensor, while dimension-wise fusion efficiently combines these dimension-wise representations, allowing the DiCE unit to efficiently encode both the spatial and the channel-wise information contained in the input tensor. The DiCE unit is simple and can be seamlessly integrated into any architecture to improve its efficiency and performance. Compared to depth-wise separable convolutions, the DiCE unit shows significant improvements across different architectures. When DiCE units are stacked to build the DiCENet model, we observe significant improvements over state-of-the-art models across various computer vision tasks, including image classification, object detection, and semantic segmentation. On the ImageNet dataset, DiCENet delivers 2-4% higher accuracy than state-of-the-art manually designed models (e.g., MobileNetv2 and ShuffleNetv2). DiCENet also generalizes better to tasks often run on resource-constrained devices (e.g., object detection) than state-of-the-art separable-convolution-based efficient networks, including neural-architecture-search-based methods (e.g., MobileNetv3 and MixNet). Our PyTorch source code is available at https://github.com/sacmehta/EdgeNets/
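To make the idea concrete, the following is a minimal PyTorch sketch of a dimension-wise unit: depth-wise convolutions are applied along the channel axis directly and along the height and width axes by permuting the tensor so each of those axes plays the role of "channels", after which the three responses are fused. The class name `DiceSketch` and the 1x1-convolution fusion are simplifying assumptions made here for illustration; the authors' actual DiCE unit (see the EdgeNets repository linked above) uses a more elaborate fusion scheme.

```python
import torch
import torch.nn as nn

class DiceSketch(nn.Module):
    """Illustrative sketch of dimension-wise convolution + fusion.

    Note: height and width must be fixed at construction time, since the
    spatial axes are treated as channel axes for their respective convs.
    """
    def __init__(self, channels, height, width, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Depth-wise conv over the channel dimension (standard depth-wise).
        self.conv_c = nn.Conv2d(channels, channels, kernel_size,
                                padding=pad, groups=channels)
        # Depth-wise convs over the height and width dimensions: permute so
        # the target dimension plays the role of "channels".
        self.conv_h = nn.Conv2d(height, height, kernel_size,
                                padding=pad, groups=height)
        self.conv_w = nn.Conv2d(width, width, kernel_size,
                                padding=pad, groups=width)
        # Simplified dimension-wise fusion: a point-wise conv over the
        # concatenated responses (an assumption, not the paper's DimFuse).
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):                            # x: (B, C, H, W)
        out_c = self.conv_c(x)                       # filter along channels
        out_h = self.conv_h(x.permute(0, 2, 1, 3))   # (B, H, C, W)
        out_h = out_h.permute(0, 2, 1, 3)            # back to (B, C, H, W)
        out_w = self.conv_w(x.permute(0, 3, 2, 1))   # (B, W, H, C)
        out_w = out_w.permute(0, 3, 2, 1)            # back to (B, C, H, W)
        return self.fuse(torch.cat([out_c, out_h, out_w], dim=1))

# Usage: the output keeps the input shape, so the unit can drop into
# existing architectures as a replacement for a separable convolution.
x = torch.randn(1, 32, 56, 56)
unit = DiceSketch(channels=32, height=56, width=56)
print(unit(x).shape)  # torch.Size([1, 32, 56, 56])
```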

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Semantic Segmentation | Cityscapes val | DiCENet | mIoU | 63.4 | #80 |
| Image Classification | ImageNet | DiCENet | Top 1 Accuracy | 75.1% | #880 |
| Image Classification | ImageNet | DiCENet | GFLOPs | 0.553 | #56 |
| Semantic Segmentation | PASCAL VOC 2012 test | DiCENet | Mean IoU | 67.31% | #43 |
| Semantic Segmentation | PASCAL VOC 2012 val | DiCENet | mIoU | 66.5% | #21 |
