Vision Transformers for Dense Prediction
We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art. Our models are available at https://github.com/intel-isl/DPT.
ICCV 2021
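To make the reassembly idea from the abstract concrete, below is a minimal PyTorch sketch of the token-to-feature-map step, not the released DPT implementation: the module name `Reassemble`, the 1x1 projection, and the bilinear resampling are simplifying assumptions, and the actual decoder progressively fuses several such maps from different transformer stages with convolutional fusion blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Reassemble(nn.Module):
    """Hypothetical sketch of the 'reassemble' step: drop the readout
    (class) token, fold the remaining patch tokens into a 2D grid,
    project the channels, and resample to a target resolution."""

    def __init__(self, dim, out_channels, scale):
        super().__init__()
        self.project = nn.Conv2d(dim, out_channels, kernel_size=1)
        self.scale = scale  # >1 upsamples, <1 downsamples the token grid

    def forward(self, tokens, grid_hw):
        # tokens: (batch, 1 + H*W, dim), with a leading readout token
        b = tokens.shape[0]
        h, w = grid_hw
        x = tokens[:, 1:, :]                        # drop readout token
        x = x.transpose(1, 2).reshape(b, -1, h, w)  # tokens -> feature map
        x = self.project(x)
        return F.interpolate(x, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)

# Example: 16x16 patches of a 384x384 input give a 24x24 token grid.
tokens = torch.randn(1, 1 + 24 * 24, 768)
feat = Reassemble(dim=768, out_channels=256, scale=4)(tokens, (24, 24))
print(feat.shape)  # torch.Size([1, 256, 96, 96])
```

Because every transformer stage keeps this full token grid, each reassembled map retains the global receptive field the abstract attributes to the backbone; only the decoder's resampling changes the resolution.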
Results
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ADE20K | DPT-Hybrid | Validation mIoU | 49.02 | # 113 |
| Semantic Segmentation | ADE20K val | DPT-Hybrid | mIoU | 49.02 | # 56 |
| Semantic Segmentation | ADE20K val | DPT-Hybrid | Pixel Accuracy | 83.11 | # 4 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | absolute relative error | 0.062 | # 22 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | RMSE | 2.573 | # 22 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | RMSE log | 0.092 | # 21 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25 | 0.959 | # 21 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25^2 | 0.995 | # 18 |
| Monocular Depth Estimation | KITTI Eigen split | DPT-Hybrid | Delta < 1.25^3 | 0.999 | # 4 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | RMSE | 0.357 | # 29 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | absolute relative error | 0.110 | # 33 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | Delta < 1.25 | 0.904 | # 29 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | Delta < 1.25^2 | 0.988 | # 17 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | Delta < 1.25^3 | 0.994 | # 33 |
| Monocular Depth Estimation | NYU-Depth V2 | DPT-Hybrid | log 10 | 0.045 | # 31 |
| Semantic Segmentation | PASCAL Context | DPT-Hybrid | mIoU | 60.46 | # 11 |
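The depth rows above use the standard monocular depth evaluation metrics. For reference, here is a minimal NumPy sketch of their usual definitions; the function name and the `gt > 0` masking convention are illustrative assumptions, and benchmark protocols on KITTI/NYUv2 additionally apply dataset-specific crops and depth caps.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]

    abs_rel = np.mean(np.abs(pred - gt) / gt)      # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))      # RMSE
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))  # RMSE log
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))         # log 10

    # Delta accuracies: fraction of pixels whose prediction/ground-truth
    # ratio stays below 1.25, 1.25^2, 1.25^3 (higher is better).
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]

    return abs_rel, rmse, rmse_log, log10, deltas
```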