Webly Supervised Concept Expansion for General Purpose Vision Models

General Purpose Vision (GPV) systems are models designed to solve a wide array of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large, fully supervised datasets. Scaling GPVs to tens of thousands of concepts by acquiring data to learn each concept for every skill quickly becomes prohibitive. This work presents an effective and inexpensive alternative: learn skills from supervised datasets, learn concepts from web image search, and leverage a key characteristic of GPVs, the ability to transfer visual knowledge across skills. We use a dataset of 1M+ images spanning 10k+ visual concepts to demonstrate webly-supervised concept expansion for two existing GPVs (GPV-1 and VL-T5) on 3 benchmarks: 5 COCO-based datasets (80 primary concepts), a newly curated series of 5 datasets based on the OpenImages and VisualGenome repositories (~500 concepts), and a web-derived dataset (10k+ concepts). We also propose a new architecture, GPV-2, which supports a variety of tasks: vision tasks like classification and localization, vision+language tasks like QA and captioning, and more niche ones like human-object interaction detection. GPV-2 benefits hugely from web data and outperforms GPV-1 and VL-T5 across these benchmarks. Our data, code, and web demo are available at https://prior.allenai.org/projects/gpv2.

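To make the training recipe concrete, below is a minimal sketch (not the authors' code) of how web-search images, labeled only by their query string, can be mixed with a fully supervised skill dataset during training. The class names (WebSearchConceptDataset, SupervisedSkillDataset), file paths, and prompt wording are illustrative assumptions; only the overall idea, learning skills from annotated data and concepts from cheap web supervision within one model, comes from the paper.

```python
# Minimal sketch (assumptions noted above): mixing a webly-supervised
# concept dataset with a supervised skill dataset for a GPV-style model.
from torch.utils.data import Dataset, ConcatDataset, DataLoader


class WebSearchConceptDataset(Dataset):
    """Images returned by web image search, labeled only by the query string.
    Supplies cheap, noisy *concept* supervision over 10k+ categories."""
    def __init__(self, records):
        # records: list of (image_path, search_query) pairs
        self.records = records

    def __len__(self):
        return len(self.records)

    def __getitem__(self, i):
        image_path, query = self.records[i]
        # Each web example is phrased as a categorization task.
        return {"image": image_path,
                "prompt": "What is this object?",
                "target": query,
                "source": "web"}


class SupervisedSkillDataset(Dataset):
    """Human-annotated examples (e.g. VQA, captioning, localization) that
    supply *skill* supervision over a smaller concept vocabulary."""
    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return {**self.examples[i], "source": "supervised"}


# Toy records standing in for real scraped / annotated data.
web = WebSearchConceptDataset([("web/okapi_001.jpg", "okapi"),
                               ("web/theremin_004.jpg", "theremin")])
skills = SupervisedSkillDataset([{"image": "coco/000001.jpg",
                                  "prompt": "What color is the bus?",
                                  "target": "yellow"}])

# Training samples from the union of both sources; because one shared model
# handles every prompt, concepts seen only in web categorization can
# transfer to other skills such as VQA or captioning.
loader = DataLoader(ConcatDataset([web, skills]), batch_size=2, shuffle=True)
for batch in loader:
    pass  # forward/backward through the GPV would go here
```

Phrasing every web image as a prompt/target pair is one plausible way to feed both data sources through the same text-conditioned interface; the exact prompt format used by GPV-1, VL-T5, and GPV-2 is defined in the paper and released code.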
Task | Dataset | Model | Metric | Value | Global Rank
Visual Question Answering (VQA) | A-OKVQA | GPV-2 | MC Accuracy | 53.7 | #5
Visual Question Answering (VQA) | A-OKVQA | GPV-2 | DA VQA Score | 40.7 | #7
Referring Expression Comprehension | GRIT | GPV-2 | Refexp (ablation) | 51.5 | #3
Referring Expression Comprehension | GRIT | GPV-2 | Refexp (test) | 52.1 | #2
Visual Question Answering (VQA) | GRIT | GPV-2 | VQA (ablation) | 63.5 | #2
Visual Question Answering (VQA) | GRIT | GPV-2 | VQA (test) | 63.2 | #2
Object Localization | GRIT | GPV-2 | Localization (ablation) | 53.6 | #2
Object Localization | GRIT | GPV-2 | Localization (test) | 53.6 | #2
Object Categorization | GRIT | GPV-2 | Categorization (ablation) | 54.7 | #2
Object Categorization | GRIT | GPV-2 | Categorization (test) | 55.1 | #2