Temporally Resolution Decrement: Utilizing the Shape Consistency for Higher Computational Efficiency

2 Dec 2021  ·  Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu

Image resolution, which is closely related to both accuracy and computational cost, plays a pivotal role in network training. In this paper, we observe that a downscaled image retains relatively complete shape semantics but loses extensive texture information. Inspired by the consistency of shape semantics and the fragility of texture information, we propose a novel training strategy named Temporally Resolution Decrement, in which training images are randomly reduced to a smaller resolution in the time domain. During alternate training on the reduced and original images, the unstable texture information yields a weaker correlation between texture-related patterns and the correct label, naturally forcing the model to rely more on shape properties, which are robust and conform to the human decision rule. Surprisingly, our approach greatly improves both the training and inference efficiency of convolutional neural networks. On ImageNet classification, using only 33\% of the computation (randomly reducing training images to 112$\times$112 for 90\% of the epochs) still improves ResNet-50 from 76.32\% to 77.71\% top-1 accuracy. Combined with a strong ResNet-50 training procedure on ImageNet, our method achieves 80.42\% top-1 accuracy while saving 37.5\% of the computational overhead. To the best of our knowledge, this is the highest ImageNet single-crop accuracy for ResNet-50 at 224$\times$224 without extra data or distillation.
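To make the alternating-resolution idea concrete, below is a minimal PyTorch-style sketch that randomly downsamples a training batch to 112$\times$112 with 90\% probability. The function name `maybe_downsample`, the per-batch (rather than per-epoch) random draw, and the bilinear resampling are illustrative assumptions, not the authors' exact procedure.

```python
import random
import torch.nn.functional as F

def maybe_downsample(images, low_res=112, prob=0.9):
    """Randomly downsample a batch of images (N, C, H, W).

    A rough sketch of temporally alternating resolution: `low_res` and
    `prob` mirror the 112x112 / 90%-of-epochs setting quoted in the
    abstract, but the paper's actual schedule may differ.
    """
    if random.random() < prob:
        # Bilinear interpolation is a common choice for resizing tensors;
        # the abstract does not specify the resampling method, so this
        # is an assumption.
        return F.interpolate(images, size=(low_res, low_res),
                             mode="bilinear", align_corners=False)
    return images  # keep the original (e.g. 224x224) resolution

# Hypothetical usage inside a standard training loop:
# for images, labels in train_loader:
#     images = maybe_downsample(images)
#     loss = criterion(model(images), labels)
```

Because shape structure survives downsampling while fine texture does not, batches processed at the lower resolution supervise the model mostly through shape cues, which is the mechanism the abstract credits for the accuracy gain.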
