MobileStereoNet: Towards Lightweight Deep Networks for Stereo Matching

22 Aug 2021  ·  Faranak Shamsafar, Samuel Woerz, Rafia Rahim, Andreas Zell ·

Recent methods in stereo matching have continuously improved accuracy using deep models. This gain, however, comes at a high computational cost, such that the network may not fit even on a moderate GPU. This is problematic when the model needs to be deployed on resource-limited devices. To address this, we propose two lightweight models for stereo vision with reduced complexity and without sacrificing accuracy. Depending on the dimension of the cost volume, we design a 2D and a 3D model with encoder-decoders built from 2D and 3D convolutions, respectively. To this end, we leverage 2D MobileNet blocks and extend them to 3D for the stereo vision application. In addition, a new cost volume is proposed to boost the accuracy of the 2D model, making it perform close to 3D networks. Experiments show that the proposed 2D/3D networks effectively reduce the computational expense (27%/95% and 72%/38% fewer parameters/operations in the 2D and 3D models, respectively) while upholding accuracy. Our code is available at
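The parameter savings come from the MobileNet-style factorization of convolutions into a depthwise step plus a pointwise step, extended here to 3D. A minimal sketch of the parameter-count arithmetic, under assumed example channel and kernel sizes (not the paper's actual layer dimensions):

```python
# Hypothetical illustration of why MobileNet-style factorization shrinks
# a 3D convolution: compare weight counts of a standard 3D conv with a
# depthwise-separable 3D conv (depthwise k*k*k conv + 1x1x1 pointwise conv).
# Channel/kernel sizes below are assumed examples, not the paper's layers.

def conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Weights of a standard 3D convolution (bias omitted)."""
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise 3D conv (one k^3 filter per input channel)
    followed by a 1x1x1 pointwise conv mixing channels."""
    return c_in * k ** 3 + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 32, 32, 3          # assumed example sizes
    std = conv3d_params(c_in, c_out, k)             # 32*32*27 = 27648
    sep = separable_conv3d_params(c_in, c_out, k)   # 32*27 + 32*32 = 1888
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For these sizes the separable form uses roughly 14.6x fewer weights, which is the kind of reduction that lets the 3D model cut parameters while keeping accuracy.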



Results from the Paper

Task                     Dataset     Model                Metric Name              Metric Value  Global Rank
Stereo Depth Estimation  KITTI2015   2D-MobileStereoNet   three pixel error        2.67          # 3
Stereo Depth Estimation  KITTI2015   3D-MobileStereoNet   three pixel error        1.69          # 1
Stereo Depth Estimation  sceneflow   3D-MobileStereoNet   Average End-Point Error  0.80          # 1
Stereo Depth Estimation  sceneflow   3D-MobileStereoNet   EPE                      0.80          # 1
Stereo Depth Estimation  sceneflow   2D-MobileStereoNet   Average End-Point Error  1.14          # 3
Stereo Depth Estimation  sceneflow   2D-MobileStereoNet   EPE                      1.14          # 2