Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks

We introduce Florence-2, a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-language tasks. While existing large vision models excel at transfer learning, they struggle to perform a diversity of tasks with simple instructions, a capability that requires handling the complexity of varied spatial hierarchies and semantic granularities. Florence-2 is designed to take text prompts as task instructions and generate the desired results in text form, whether captioning, object detection, grounding, or segmentation. This multi-task learning setup demands large-scale, high-quality annotated data. To this end, we co-developed FLD-5B, a dataset of 5.4 billion comprehensive visual annotations on 126 million images, built with an iterative strategy of automated image annotation and model refinement. We adopted a sequence-to-sequence structure to train Florence-2 to perform versatile and comprehensive vision tasks. Extensive evaluations on numerous tasks demonstrate that Florence-2 is a strong contender among vision foundation models, with unprecedented zero-shot and fine-tuning capabilities.
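To make the prompt-based interface concrete, the sketch below shows how such a model can be queried through its Hugging Face release (microsoft/Florence-2-large). The task tokens (e.g. "<OD>" for object detection, "<CAPTION>" for captioning) and the post_process_generation helper follow the published model card rather than the paper's abstract, so treat the details as assumptions.

```python
# Minimal sketch of Florence-2's prompt-based interface, assuming the
# Hugging Face release (microsoft/Florence-2-large). Task tokens and the
# post_process_generation helper come from the model card, not the paper.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

# The task is selected purely by the text prompt; the same weights handle
# captioning ("<CAPTION>"), detection ("<OD>"), grounding, and segmentation.
prompt = "<OD>"
inputs = processor(text=prompt, images=image, return_tensors="pt")

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# Decode the raw text output (which encodes locations as tokens) into
# pixel-space boxes and labels for the requested task.
parsed = processor.post_process_generation(
    raw, task=prompt, image_size=(image.width, image.height)
)
print(parsed)  # e.g. {'<OD>': {'bboxes': [...], 'labels': [...]}}
```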


Results from the Paper


Ranked #1 on Visual Grounding on RefCOCO+ val (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Visual Grounding | RefCOCO+ testA | Florence-2-large-ft | Accuracy (%) | 95.3 | #1 | Yes |
| Visual Grounding | RefCOCO+ testB | Florence-2-large-ft | Accuracy (%) | 92.0 | #1 | Yes |
| Visual Grounding | RefCOCO+ val | Florence-2-large-ft | Accuracy (%) | 93.4 | #1 | Yes |
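The RefCOCO+ rows above report the fine-tuned large model on referring-expression grounding. As a rough illustration of how the same prompt interface handles a grounding query (this is not the paper's evaluation harness), a text input can be appended to a grounding task token; the "<CAPTION_TO_PHRASE_GROUNDING>" token name follows the model card and is an assumption here, as is the example expression.

```python
# Continuing from the previous sketch (reuses model, processor, image):
# ground a free-form expression in the image. The task token is taken
# from the model card; the expression is a hypothetical query.
expression = "a black cat lying on the sofa"
prompt = "<CAPTION_TO_PHRASE_GROUNDING>" + expression

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# Returns boxes for each grounded phrase in the expression.
boxes = processor.post_process_generation(
    raw,
    task="<CAPTION_TO_PHRASE_GROUNDING>",
    image_size=(image.width, image.height),
)
print(boxes)
```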

Methods


No methods listed for this paper.