Pix2seq: A Language Modeling Framework for Object Detection

We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.
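To make the "objects as token sequences" idea concrete, here is a minimal sketch of how a bounding box and class label could be serialized into discrete tokens by quantizing coordinates into bins and placing class tokens after the coordinate bins in a shared vocabulary. The function names, the bin count, and the vocabulary offset below are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of serializing one object as discrete tokens,
# in the spirit of Pix2seq. num_bins, image_size, and the vocabulary
# layout (class tokens offset by num_bins) are assumptions for this demo.

def quantize(coord, image_size, num_bins=500):
    """Map a continuous coordinate in [0, image_size] to a discrete bin index."""
    bin_idx = int(coord / image_size * (num_bins - 1))
    return max(0, min(num_bins - 1, bin_idx))  # clamp to the valid bin range

def object_to_tokens(bbox, class_id, image_size=640, num_bins=500):
    """Serialize one object as [ymin, xmin, ymax, xmax, class] tokens."""
    ymin, xmin, ymax, xmax = bbox
    coord_tokens = [quantize(c, image_size, num_bins)
                    for c in (ymin, xmin, ymax, xmax)]
    # Class tokens live after the coordinate bins in the shared vocabulary.
    return coord_tokens + [num_bins + class_id]

tokens = object_to_tokens((120.0, 64.0, 300.0, 256.0), class_id=17)
# → [93, 49, 233, 199, 517]
```

A detector trained this way generates such token sequences autoregressively, one object after another, and decoding is just the inverse mapping from bins back to coordinates.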

ICLR 2022

Results from the Paper


Ranked #77 on Object Detection on COCO minival (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Object Detection | COCO minival | Pix2seq (ViT-L) | box AP | 50.0 | #77 |
| Object Detection | COCO minival | Pix2seq (R50) | box AP | 42.6 | #137 |
| Object Detection | COCO minival | Pix2seq (ViT-B) | box AP | 47.1 | #92 |
| Object Detection | COCO minival | Pix2seq (R50-C4) | box AP | 47.3 | #91 |
| Object Detection | COCO minival | Pix2seq (R101-DC5) | box AP | 45.0 | #107 |
| | | | AP50 | 63.2 | #51 |
| | | | AP75 | 48.6 | #36 |
| | | | APS | 28.2 | #26 |
| | | | APM | 48.9 | #22 |
| | | | APL | 60.4 | #27 |
| Object Detection | COCO minival | Pix2seq (R50-DC5) | box AP | 43.2 | #129 |
| | | | AP50 | 61.0 | #71 |
| | | | AP75 | 46.1 | #54 |
| | | | APS | 26.6 | #36 |
| | | | APM | 47.0 | #37 |
| | | | APL | 58.6 | #39 |

Methods


No methods listed for this paper.