IQR (Image-Query Retrieval Dataset)

Introduced by Xie et al. in Zero and R2D2: A Large-scale Chinese Cross-modal Benchmark and A Vision-Language Framework

IQR is proposed for the image-text retrieval task. It consists of 200,000 queries and their corresponding images, annotated as image-query pairs.
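Annotated image-query pairs of this kind are typically evaluated with a ranking metric such as Recall@K. A minimal sketch, assuming one ground-truth image per query (the function name and the toy similarity scores below are illustrative, not part of the IQR release):

```python
def recall_at_k(similarities, k):
    """similarities[i] holds the scores of query i against every image;
    by convention, image i is the ground-truth match for query i."""
    hits = 0
    for i, scores in enumerate(similarities):
        # Rank image indices by similarity, highest first.
        ranked = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
        if i in ranked[:k]:
            hits += 1
    return hits / len(similarities)

# Toy similarity matrix for three annotated image-query pairs.
sims = [
    [0.9, 0.2, 0.1],  # query 0 ranks its image (0) first
    [0.3, 0.1, 0.8],  # query 1 ranks its image (1) last
    [0.2, 0.1, 0.7],  # query 2 ranks its image (2) first
]
print(recall_at_k(sims, 1))  # → 0.666... (2 of 3 queries hit at rank 1)
```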


License


  • Unknown
