Memory-Efficient Implementation of DenseNets

21 Jul 2017 · Geoff Pleiss, Danlu Chen, Gao Huang, Tongcheng Li, Laurens van der Maaten, Kilian Q. Weinberger

The DenseNet architecture is highly computationally efficient as a result of feature reuse. However, a naive DenseNet implementation can require a significant amount of GPU memory: if not properly managed, pre-activation batch normalization and contiguous convolution operations can produce feature maps whose total storage grows quadratically with network depth.
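The quadratic growth can be illustrated with a rough back-of-the-envelope count. The sketch below is an assumption-laden model (the functions, the doubling factor for BN outputs, and the shared-workspace accounting are all illustrative, not taken from the paper's implementation): layer l of a dense block receives the concatenation of all earlier outputs, so a naive implementation that keeps every concatenated input and BN output for the backward pass stores O(L^2) feature maps, while sharing workspaces and recomputing those intermediates keeps storage linear in depth.

```python
# Hedged back-of-the-envelope sketch (not the paper's actual code):
# count feature maps materialized by a dense block of depth L with
# k0 initial channels and growth rate k.

def naive_stored_maps(num_layers, k0=24, growth_rate=12):
    """Maps stored if every layer's concatenated input AND its
    batch-norm output are kept for the backward pass (naive)."""
    total = 0
    for l in range(1, num_layers + 1):
        input_maps = k0 + (l - 1) * growth_rate  # concat of all prior outputs
        total += 2 * input_maps                  # concat copy + BN output
    return total  # grows quadratically in num_layers

def shared_storage_maps(num_layers, k0=24, growth_rate=12):
    """Maps stored if concat/BN intermediates live in two shared
    workspaces (sized for the deepest layer) and are recomputed
    during the backward pass; only layer outputs persist."""
    deepest_input = k0 + (num_layers - 1) * growth_rate
    return num_layers * growth_rate + 2 * deepest_input  # linear in depth

for depth in (10, 20, 40):
    print(depth, naive_stored_maps(depth), shared_storage_maps(depth))
```

Doubling the depth roughly quadruples the naive count but only doubles the shared-storage count, which is the asymptotic gap the paper's memory-efficient implementation exploits.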
