Object Detection Models

Detection Transformer

Introduced by Carion et al. in End-to-End Object Detection with Transformers

DETR, or Detection Transformer, is a set-based object detector that applies a Transformer on top of a convolutional backbone. A conventional CNN backbone learns a 2D representation of the input image; the model flattens this feature map and supplements it with a positional encoding before passing it to a transformer encoder. A transformer decoder then takes as input a small, fixed number of learned positional embeddings, called object queries, and additionally attends to the encoder output. Each output embedding of the decoder is passed to a shared feed-forward network (FFN) that predicts either a detection (class and bounding box) or a "no object" class.
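The pipeline above (CNN backbone → flatten + positional encoding → encoder → decoder with object queries → shared prediction heads) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the single strided convolution stands in for the ResNet backbone used in practice, and the hyperparameter defaults (hidden dim 256, 8 heads, 100 queries) follow the values reported in the paper.

```python
import torch
from torch import nn

class MinimalDETR(nn.Module):
    """Simplified sketch of the DETR architecture (illustrative only)."""

    def __init__(self, num_classes=91, hidden_dim=256, nheads=8,
                 num_encoder_layers=6, num_decoder_layers=6, num_queries=100):
        super().__init__()
        # Stand-in for a CNN backbone: one strided conv producing a 2D feature map.
        self.backbone = nn.Conv2d(3, hidden_dim, kernel_size=16, stride=16)
        self.transformer = nn.Transformer(hidden_dim, nheads,
                                          num_encoder_layers, num_decoder_layers)
        # Learned object queries fed to the decoder.
        self.query_embed = nn.Parameter(torch.rand(num_queries, hidden_dim))
        # Learned 2D positional encodings for the flattened feature map.
        self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
        self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
        # Shared heads: class logits (+1 for "no object") and box coordinates.
        self.class_head = nn.Linear(hidden_dim, num_classes + 1)
        self.bbox_head = nn.Linear(hidden_dim, 4)

    def forward(self, x):
        h = self.backbone(x)                        # (B, C, H, W)
        B, C, H, W = h.shape
        # Build a (H*W, 1, C) positional encoding from row/col embeddings.
        pos = torch.cat([
            self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
            self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
        ], dim=-1).flatten(0, 1).unsqueeze(1)
        # Flatten the feature map to a sequence and add positions.
        src = pos + h.flatten(2).permute(2, 0, 1)   # (H*W, B, C)
        tgt = self.query_embed.unsqueeze(1).repeat(1, B, 1)
        hs = self.transformer(src, tgt)             # (num_queries, B, C)
        return self.class_head(hs), self.bbox_head(hs).sigmoid()
```

Each of the `num_queries` decoder outputs yields one class prediction and one normalized box, so the model always emits a fixed-size set of detections; slots not matched to an object are trained to predict the "no object" class.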

Source: End-to-End Object Detection with Transformers


Tasks


Task                         Papers   Share
Object Detection             127      25.76%
Object                       65       13.18%
Decoder                      51       10.34%
Semantic Segmentation        15       3.04%
Instance Segmentation        13       2.64%
Real-Time Object Detection   8        1.62%
2D Object Detection          7        1.42%
Image Classification         7        1.42%
Autonomous Driving           6        1.22%
