Line Segment Detection Using Transformers without Edges

CVPR 2021 · Yifan Xu, Weijian Xu, David Cheung, Zhuowen Tu

In this paper, we present a joint end-to-end line segment detection algorithm using Transformers that is free of post-processing and heuristics-guided intermediate processing (edge/junction/region detection). Our method, named LinE segment TRansformers (LETR), takes advantage of tokenized queries, a self-attention mechanism, and an encoder-decoder strategy within Transformers, skipping the standard heuristic designs for edge element detection and perceptual grouping. We equip the Transformers with a multi-scale encoder/decoder strategy to perform fine-grained line segment detection under a direct endpoint distance loss. This loss term is particularly suitable for detecting geometric structures such as line segments, which are not conveniently captured by standard bounding box representations. The Transformers learn to gradually refine line segments through layers of self-attention. In our experiments, we show state-of-the-art results on the Wireframe and YorkUrban benchmarks.
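The direct endpoint distance loss mentioned above compares predicted endpoint coordinates against ground-truth endpoints rather than bounding boxes. A minimal sketch of such a loss in plain Python is shown below; the function name and list-of-tuples representation are illustrative assumptions, and LETR's actual training objective also includes classification terms and Hungarian matching between queries and targets.

```python
def endpoint_distance_loss(pred, target):
    """Mean L1 distance between predicted and ground-truth line endpoints.

    pred, target: lists of (x1, y1, x2, y2) tuples, one per line segment.
    Hypothetical sketch: line segments are undirected, so each pair is
    scored under both endpoint orderings and the smaller distance is used.
    """
    total = 0.0
    for p, t in zip(pred, target):
        # The same ground-truth segment with its endpoints swapped
        swapped = (t[2], t[3], t[0], t[1])
        d_same = sum(abs(a - b) for a, b in zip(p, t))
        d_swap = sum(abs(a - b) for a, b in zip(p, swapped))
        total += min(d_same, d_swap)
    return total / len(pred)
```

For example, a prediction whose endpoints match the ground truth but in reverse order incurs zero loss, which a naive coordinate-wise L1 would penalize.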


Results from the Paper


Task                    Dataset             Model  Metric  Value  Global Rank
Multi-Task Learning     wireframe dataset   LETR   sAP10   65.2   # 1
Multi-Task Learning     wireframe dataset   LETR   sAP15   67.7   # 1
Multi-Task Learning     wireframe dataset   LETR   FH      83.3   # 1
Line Segment Detection  York Urban Dataset  LETR   sAP10   29.4   # 3
Line Segment Detection  York Urban Dataset  LETR   sAP15   31.7   # 2
Line Segment Detection  York Urban Dataset  LETR   FH      66.9   # 1
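The sAP (structural average precision) metrics in the table score a predicted segment as a true positive when the sum of squared distances between its endpoints and a ground-truth segment's endpoints falls below a threshold (10 for sAP10, 15 for sAP15, conventionally evaluated at 128×128 resolution). A hedged sketch of this matching criterion follows; the function name and tuple layout are illustrative, and the full metric additionally ranks predictions by confidence and computes average precision over the resulting matches.

```python
def saP_match(pred, gt, threshold):
    """True if a predicted segment matches a ground-truth segment under
    the sAP criterion: sum of squared endpoint distances below threshold,
    taking the better of the two endpoint orderings (segments are undirected).

    pred, gt: (x1, y1, x2, y2) tuples; threshold: e.g. 10 or 15.
    """
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    p1, p2 = pred[:2], pred[2:]
    g1, g2 = gt[:2], gt[2:]
    d_same = sq_dist(p1, g1) + sq_dist(p2, g2)
    d_swap = sq_dist(p1, g2) + sq_dist(p2, g1)
    return min(d_same, d_swap) < threshold
```

FH, by contrast, is a heatmap-based F-score that compares rasterized line maps rather than endpoint geometry.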

Methods


No methods listed for this paper.