Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy
Large-scale datasets play a vital role in computer vision. But current datasets are annotated indiscriminately, without differentiating among samples, making data collection inefficient and unscalable. The open question is how to build a mega-scale dataset actively. Although advanced active learning algorithms might be the answer, we experimentally found that they perform poorly in realistic annotation scenarios where out-of-distribution data is extensive. This work thus proposes a novel active learning framework for realistic dataset annotation. Equipped with this framework, we build a high-quality vision dataset -- Bamboo, which consists of 69M image classification annotations with 119K categories and 28M object bounding box annotations with 809 categories. We organize these categories with a hierarchical taxonomy integrated from several knowledge bases. The classification annotations are four times larger than ImageNet22K, and the detection annotations are three times larger than Objects365. Compared to ImageNet22K and Objects365, models pre-trained on Bamboo achieve superior performance on various downstream tasks (6.2% gains on classification and 2.1% gains on detection). We believe our active learning framework and Bamboo are essential for future work.
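For intuition on the "active" annotation idea the abstract contrasts with blind labeling, here is a minimal sketch of generic uncertainty sampling, the classic active learning baseline. This is an illustration only, not the paper's framework; the function name and interface are hypothetical.

```python
# Generic uncertainty-based active learning selection (illustration only;
# names and interface are hypothetical, not from the Bamboo paper).

def select_for_annotation(confidences, budget):
    """Return indices of the `budget` least-confident unlabeled samples.

    confidences: per-sample top-class probabilities from the current model.
    The selected samples are the ones sent to human annotators next.
    """
    order = sorted(range(len(confidences)), key=lambda i: confidences[i])
    return order[:budget]

# Toy usage: five unlabeled images with model confidences, budget of two.
conf = [0.95, 0.40, 0.88, 0.35, 0.60]
picked = select_for_annotation(conf, budget=2)  # -> [3, 1]
```

As the abstract notes, a strategy like this can fail when the unlabeled pool contains extensive out-of-distribution data: the least-confident samples are often OOD images that do not belong to any target category, wasting annotation budget.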
Datasets
Introduced in the Paper: Bamboo

Used in the Paper: CIFAR-10, ImageNet, MS COCO, CIFAR-100, Oxford 102 Flower, Places, ImageNet-1K, DTD, Stanford Cars, Food-101, Caltech-101, iNaturalist, ObjectNet, Objects365, CityPersons, JFT-300M, Oxford-IIIT Pet, CompCars, SUN397, OmniBenchmark, FoodX-251

Results from the Paper: Ranked #1 on Image Classification on Food-101 (using extra training data)