The 2021 Image Similarity Dataset and Challenge

This paper introduces a new benchmark for large-scale image similarity detection, used for the Image Similarity Challenge at NeurIPS'21 (ISC2021). The goal is to determine whether a query image is a modified copy of any image in a reference corpus of 1 million images. The benchmark features a variety of image transformations, including automated transformations, hand-crafted image edits, and machine-learning-based manipulations. This mimics real-life cases arising on social media, for example in integrity-related problems dealing with misinformation and objectionable content. The strength of the image manipulations, and therefore the difficulty of the benchmark, is calibrated against the performance of a set of baseline approaches. Both the query and reference sets contain a majority of "distractor" images that do not match, which corresponds to a real-life needle-in-haystack setting, and the evaluation metric reflects that. We expect the DISC21 benchmark to promote image copy detection as an important and challenging computer vision task and to refresh the state of the art. Code and data are available at https://github.com/facebookresearch/isc2021.
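The needle-in-haystack setup described above can be sketched in a few lines: each image is reduced to a global descriptor, every query is matched to its nearest reference, and the ranked predictions are scored with a micro average precision over all queries, so that distractor queries penalize over-confident matches. The sketch below uses random toy descriptors and an illustrative `micro_ap` computation; it is not the challenge's official evaluation code.

```python
# Toy sketch of copy detection with distractors, under assumed toy data:
# descriptor extraction is stubbed out with random unit vectors, and the
# micro-AP computation is an illustrative simplification.
import numpy as np

rng = np.random.default_rng(0)

# 1000 reference descriptors (unit-normalized), dimension 64.
refs = rng.standard_normal((1000, 64)).astype(np.float32)
refs /= np.linalg.norm(refs, axis=1, keepdims=True)

# 100 queries: 10 are noisy ("edited") copies of references 0..9,
# 90 are distractors with no match in the corpus.
true_pairs = {q: q for q in range(10)}
copies = refs[:10] + 0.05 * rng.standard_normal((10, 64)).astype(np.float32)
distractors = rng.standard_normal((90, 64)).astype(np.float32)
queries = np.vstack([copies, distractors])
queries /= np.linalg.norm(queries, axis=1, keepdims=True)

# One prediction per query: nearest reference by cosine similarity.
sims = queries @ refs.T
best = sims.argmax(axis=1)
scores = sims.max(axis=1)

# Micro average precision over the globally ranked prediction list:
# a prediction is correct only if the query is a copy and its true
# source image is retrieved.
order = np.argsort(-scores)
correct = np.array([true_pairs.get(int(q)) == best[q] for q in order])
precision_at_k = np.cumsum(correct) / (np.arange(len(correct)) + 1)
micro_ap = float((precision_at_k * correct).sum() / max(len(true_pairs), 1))
print(f"micro-AP on toy data: {micro_ap:.3f}")
```

Because most queries are distractors, a method must assign low scores to non-matches, not just rank the true match first for copied queries; that is what the global ranking in the metric captures.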


Datasets


Introduced in the Paper:

DISC21
Task: Image Similarity Detection — Dataset: DISC21 dev

Model                 | Metric             | Value                    | Global Rank
----------------------|--------------------|--------------------------|------------
HOW+ASMK              | w/o normalization  | 17.32                    | # 1
                      | with normalization | 37.15                    | # 1
                      | Time (ms)          | 150                      | # 3
                      | hardware           | Tesla P-100              | # 1
GIST PCA 256          | dimension          | 256                      | # 3
                      | w/o normalization  | 15.56                    | # 3
                      | hardware           | CPU, 2.2 GHz, 40 threads | # 1
GIST 960 dim          | dimension          | 960                      | # 2
                      | w/o normalization  | 14.42                    | # 4
                      | Time (ms)          | 0.55                     | # 1
                      | hardware           | CPU, 2.2 GHz, 40 threads | # 1
Multigrain 1500 dim   | dimension          | 1500                     | # 1
                      | w/o normalization  | 16.47                    | # 2
                      | with normalization | 36.42                    | # 2
                      | Time (ms)          | 23                       | # 2
                      | hardware           | Tesla V100               | # 1
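The "with normalization" rows in the table refer to similarity score normalization, which consistently improves results (e.g. 17.32 → 37.15 for HOW+ASMK). The idea is to make raw query–reference similarities comparable across queries by discounting each query's typical similarity to unrelated "background" images. The sketch below subtracts the mean of each query's top-k background similarities; this captures the spirit of such normalization under assumed toy data, and is not the paper's exact procedure (the function name `normalize_scores` and the choice of k are illustrative).

```python
# Hedged sketch of similarity score normalization: discount each query's
# raw similarities by its typical similarity to a background set of
# unrelated images, so that scores are comparable across queries.
import numpy as np

def normalize_scores(sims, bg_sims, k=10):
    """sims:    (n_query, n_ref) raw query-reference similarities.
    bg_sims: (n_query, n_background) similarities to background images.
    Subtracts the mean of each query's top-k background similarities."""
    topk = np.sort(bg_sims, axis=1)[:, -k:]  # k highest background sims per query
    return sims - topk.mean(axis=1, keepdims=True)

# Toy data: 5 queries, 100 references, 50 background images.
rng = np.random.default_rng(1)
sims = rng.random((5, 100)).astype(np.float32)
bg = rng.random((5, 50)).astype(np.float32)
norm = normalize_scores(sims, bg)
print(norm.shape)
```

After normalization, a query that is similar to everything (e.g. a common texture) no longer produces inflated scores, which matters when predictions from all queries are ranked together in the micro-AP metric.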

Methods


No methods listed for this paper.