Micro-Batch Training with Batch-Channel Normalization and Weight Standardization

25 Mar 2019 · Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille

Batch Normalization (BN) has become an out-of-the-box technique for improving deep network training. However, its effectiveness is limited for micro-batch training, i.e., training in which each GPU has only 1-2 images, which is unavoidable for many computer vision tasks, e.g., object detection and semantic segmentation, due to memory constraints. To address this issue, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to bring two success factors of BN into micro-batch training: 1) the smoothing effect on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients; BCN combines batch and channel normalization and leverages estimated statistics of the activations in convolutional layers to keep networks away from elimination singularities. We validate WS and BCN on a comprehensive set of computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation. All experimental results consistently show that WS and BCN improve micro-batch training significantly. Moreover, WS and BCN with micro-batch training can even match or outperform the performance of BN with large-batch training.
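The Weight Standardization step described above can be sketched in a few lines: each convolutional filter is rescaled to zero mean and unit variance over its fan-in (input channels and kernel spatial extent). The sketch below is a minimal NumPy illustration of that operation; the function name, the `eps` constant, and the example shapes are illustrative choices, not specifics from the paper.

```python
import numpy as np

def weight_standardization(w, eps=1e-5):
    """Standardize conv weights per output filter.

    w: array of shape (out_channels, in_channels, kh, kw).
    Each filter is shifted and scaled so its entries have zero mean
    and (approximately) unit standard deviation over its fan-in.
    eps guards against division by zero for near-constant filters.
    """
    out_channels = w.shape[0]
    flat = w.reshape(out_channels, -1)            # one row per filter
    mean = flat.mean(axis=1, keepdims=True)       # per-filter mean
    std = flat.std(axis=1, keepdims=True)         # per-filter std
    w_hat = (flat - mean) / (std + eps)
    return w_hat.reshape(w.shape)

# Hypothetical usage: an 8-filter, 3-input-channel, 3x3 conv weight.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w_hat = weight_standardization(w)
```

In the paper's setting this standardization is applied to the weights on the fly during the forward pass (the convolution uses `w_hat` while gradients flow back to the underlying `w`), which is what reparameterizes the loss surface rather than merely initializing it.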

Benchmark results on COCO minival with Mask R-CNN-FPN (ResNeXt-101, GN+WS); global rank shown in parentheses:

Instance Segmentation:
  mask AP: 38.34 (#61)
  AP50:    61.07 (#11)
  AP75:    40.82 (#13)
  APL:     56.08 (#4)
  APM:     41.73 (#6)
  APS:     18.32 (#10)

Object Detection:
  box AP:  43.12 (#118)
  AP50:    64.15 (#42)
  AP75:    47.11 (#45)
  APS:     25.49 (#48)
  APM:     47.19 (#35)
  APL:     56.39 (#52)