RRNet: Repetition-Reduction Network for Energy Efficient Decoder of Depth Estimation

We introduce the Repetition-Reduction Network (RRNet) for resource-constrained depth estimation, offering significantly improved efficiency in terms of computation, memory, and energy consumption. The proposed method is based on repetition-reduction (RR) blocks. Each RR block consists of a set of repeated convolutions and a residual connection layer that takes the place of the pointwise reduction layer, with a linear connection to the decoder. RRNet helps reduce memory usage and power consumption in the residual connections to the decoder layers. RRNet consumes approximately 3.84 times less energy and 3.06 times less memory and is approximately 2.21 times faster, without increasing the demand on hardware resources relative to the baseline network (Godard et al., CVPR'17), outperforming current state-of-the-art lightweight architectures such as SqueezeNet, ShuffleNet, MobileNet, and PyDNet.
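To make the block structure concrete, below is a minimal PyTorch sketch of how an RR block might be organized. It assumes the "repetition" is a stack of 3x3 convolutions and the "reduction" is a linear (activation-free) pointwise 1x1 convolution whose output, summed with a projected input, is what gets passed to the decoder; the layer count, channel widths, and normalization placement are illustrative assumptions rather than the paper's exact specification.

```python
import torch
import torch.nn as nn


class RRBlock(nn.Module):
    """Illustrative repetition-reduction (RR) block (assumed structure)."""

    def __init__(self, in_channels: int, out_channels: int, num_repeats: int = 3):
        super().__init__()
        # Repetition: repeated 3x3 conv + BN + ReLU layers (assumed count).
        self.repetition = nn.Sequential(*[
            layer
            for _ in range(num_repeats)
            for layer in (
                nn.Conv2d(in_channels, in_channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(in_channels),
                nn.ReLU(inplace=True),
            )
        ])
        # Reduction: pointwise 1x1 convolution with no nonlinearity (linear connection).
        self.reduction = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        # Residual path: project the input if the channel count changes.
        self.project = (nn.Identity() if in_channels == out_channels
                        else nn.Conv2d(in_channels, out_channels, 1, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual sum is the feature map forwarded to the decoder.
        return self.reduction(self.repetition(x)) + self.project(x)


# Example: a single RR block applied to a dummy encoder feature map.
features = torch.randn(1, 64, 64, 208)
block = RRBlock(in_channels=64, out_channels=128)
print(block(features).shape)  # torch.Size([1, 128, 64, 208])
```

The design intent, as described in the abstract, is that replacing a heavier pointwise reduction path with this repeated-convolution-plus-residual structure lowers the memory traffic and energy cost of the skip connections into the decoder.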
