Vision Transformers for Dense Prediction

24 Mar 2021 · René Ranftl, Alexey Bochkovskiy, Vladlen Koltun

We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context, where it also sets the new state of the art. Our models are available at https://github.com/intel-isl/DPT.
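The decoder's key operation is reassembling the transformer's token sequence into an image-like feature map. Below is a minimal PyTorch sketch of that idea, assuming a ViT with patch size 16 and embedding dimension 768; the `Reassemble` module, its hyperparameters, and the bilinear resampling are illustrative assumptions, not the exact implementation from the DPT repository.

```python
import torch
import torch.nn as nn

class Reassemble(nn.Module):
    """Turn a sequence of ViT tokens back into an image-like feature map
    at a chosen resolution (a sketch of the idea described in the paper,
    not the authors' exact implementation)."""

    def __init__(self, embed_dim=768, out_channels=256, scale=4, patch_size=16):
        super().__init__()
        self.patch_size = patch_size
        # 1x1 conv projects token features to the decoder channel count.
        self.project = nn.Conv2d(embed_dim, out_channels, kernel_size=1)
        # Resample the patch grid to the target resolution (here: upsample
        # by `scale` relative to the grid of patches).
        self.resample = nn.Upsample(scale_factor=scale, mode="bilinear",
                                    align_corners=False)

    def forward(self, tokens, image_hw):
        # tokens: (batch, 1 + num_patches, embed_dim); drop the readout token.
        b, n, c = tokens.shape
        h, w = image_hw[0] // self.patch_size, image_hw[1] // self.patch_size
        patches = tokens[:, 1:, :]                          # remove readout token
        fmap = patches.transpose(1, 2).reshape(b, c, h, w)  # tokens -> 2D grid
        return self.resample(self.project(fmap))

# Example: reassemble tokens from one transformer stage of a 384x384 image.
tokens = torch.randn(2, 1 + (384 // 16) ** 2, 768)  # (batch, 577, 768)
fmap = Reassemble()(tokens, (384, 384))
print(fmap.shape)  # torch.Size([2, 256, 96, 96])
```

In the paper, this operation is applied to tokens from several transformer stages at different target resolutions, and the resulting maps are progressively fused by a convolutional (RefineNet-style) decoder into a full-resolution prediction.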


Results from the Paper


DPT-Hybrid ranks #1 for Monocular Depth Estimation on NYU-Depth V2 (using extra training data).

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ADE20K | DPT-Hybrid | Validation mIoU | 49.02 | #10 |
| Semantic Segmentation | ADE20K val | DPT-Hybrid | mIoU | 49.02 | #16 |
| Semantic Segmentation | ADE20K val | DPT-Hybrid | Pixel Accuracy | 83.11 | #2 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Absolute relative error | 0.062 | #3 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | RMSE | 2.573 | #2 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | RMSE log | 0.092 | #2 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25 | 0.959 | #2 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25^2 | 0.995 | #1 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25^3 | 0.999 | #1 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | RMSE | 0.357 | #1 |
| Semantic Segmentation | PASCAL Context | DPT-Hybrid | mIoU | 60.46 | #2 |
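For reference, the depth metrics reported above are the standard Eigen-split evaluation measures. A minimal NumPy sketch of their definitions follows; dataset-specific valid-pixel masking and depth capping are omitted for brevity.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth-estimation metrics (Eigen-split style).
    `pred` and `gt` are arrays of positive depths over valid pixels only."""
    abs_rel = np.mean(np.abs(pred - gt) / gt)                      # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))                      # RMSE
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))  # RMSE log
    ratio = np.maximum(pred / gt, gt / pred)                       # per-pixel threshold ratio
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]       # Delta < 1.25^k accuracies
    return abs_rel, rmse, rmse_log, deltas
```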
