The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.
11,187 PAPERS • 96 BENCHMARKS
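As a quick reference, here is a minimal sketch of reading COCO captions with the pycocotools API (the annotation path and the choice of the 2017 validation split are assumptions; adjust to your local copy):

```python
from pycocotools.coco import COCO

# Assumed local path to the downloaded 2017 caption annotations.
coco = COCO("annotations/captions_val2017.json")

# Every image is paired with several human-written captions.
img_id = coco.getImgIds()[0]
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    print(ann["caption"])
```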
The Flickr30k dataset contains 31,000 images collected from Flickr, each paired with 5 reference sentences provided by human annotators.
821 PAPERS • 11 BENCHMARKS
The NUS-WIDE dataset contains 269,648 images with a total of 5,018 tags collected from Flickr. These images are manually annotated with 81 concepts, including objects and scenes.
336 PAPERS • 4 BENCHMARKS
PASCAL VOC 2007 is a dataset for image recognition. The twenty object classes that have been selected are: person; bird, cat, cow, dog, horse, sheep; aeroplane, bicycle, boat, bus, car, motorbike, train; bottle, chair, dining table, potted plant, sofa, and tv/monitor.
124 PAPERS • 14 BENCHMARKS
The CUHK-PEDES dataset is a caption-annotated pedestrian dataset. It contains 40,206 images of 13,003 persons, collected from five existing person re-identification datasets (CUHK03, Market-1501, SSM, VIPeR, and CUHK01). Each image is annotated with two text descriptions by crowd-sourcing workers, and the sentences incorporate rich details about person appearances, actions, and poses.
86 PAPERS • 4 BENCHMARKS
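Retrieval on caption-annotated datasets like this one is commonly scored with Recall@K: the fraction of text queries whose ground-truth image appears among the top K ranked gallery images. A minimal NumPy sketch, assuming pre-computed, L2-normalized embeddings (the embedding model itself is out of scope here):

```python
import numpy as np

def recall_at_k(text_emb, image_emb, gt_index, k=10):
    """text_emb: (N, d) query embeddings; image_emb: (M, d) gallery
    embeddings; gt_index: (N,) gallery index of each query's match."""
    sims = text_emb @ image_emb.T              # cosine similarity (normalized inputs)
    top_k = np.argsort(-sims, axis=1)[:, :k]   # top-k gallery indices per query
    hits = (top_k == gt_index[:, None]).any(axis=1)
    return hits.mean()
```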
Recipe1M+ is a dataset containing one million structured cooking recipes with 13M associated images.
66 PAPERS • 3 BENCHMARKS
The Remote Sensing Image Captioning Dataset (RSICD) is a dataset for the remote sensing image captioning task. It contains 10,921 remote sensing images collected from Google Earth, Baidu Map, MapABC, and Tianditu, resized to a fixed 224×224 pixels from sources of various resolutions, with five sentence descriptions per image.
58 PAPERS • 3 BENCHMARKS
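As an illustration of how such an image-captioning dataset might be wrapped for training, here is a hypothetical PyTorch loader (the JSON layout with `filename` and `sentences` fields is an assumption for the sketch, not RSICD's official annotation format):

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class RemoteSensingCaptions(Dataset):
    """Hypothetical loader: one JSON list of
    {"filename": ..., "sentences": [five captions]} records."""

    def __init__(self, image_root, ann_file, transform=None):
        self.root = Path(image_root)
        self.records = json.loads(Path(ann_file).read_text())
        self.transform = transform

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        img = Image.open(self.root / rec["filename"]).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, rec["sentences"]  # image plus its five captions
```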
The dataset contains 33,010 molecule-description pairs, split 80%/10%/10% into train/val/test sets. The task is to retrieve the relevant molecule given a natural language description.
34 PAPERS • 4 BENCHMARKS
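For reference, a sketch of the 80/10/10 split proportions (illustrative only; released molecule-description datasets typically ship with fixed split files, which should be used for comparability):

```python
import random

def split_80_10_10(pairs, seed=0):
    """Shuffle (description, molecule) pairs and cut 80/10/10."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```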
SemArt is a multi-modal dataset for semantic art understanding. It is a collection of 21,384 fine-art painting images, each associated with a number of attributes and a textual artistic comment, such as those that appear in art catalogues or museum collections.
15 PAPERS • NO BENCHMARKS YET
ChineseFoodNet targets the automatic recognition of pictured Chinese dishes. Most existing food image datasets collected images either from recipe pictures or selfies; in ChineseFoodNet, each food category consists not only of web recipe and menu pictures but also of photos taken of real dishes. The dataset contains over 180,000 food photos across 208 categories, with each category covering large variations in the presentation of the same Chinese dish.
6 PAPERS • NO BENCHMARKS YET
SoundingEarth consists of co-located aerial imagery and audio samples all around the world.
6 PAPERS • 1 BENCHMARK
Contains 8k Flickr images with captions.
5 PAPERS • 2 BENCHMARKS
The Song Describer Dataset (SDD) contains ~1.1k captions for 706 permissively licensed music recordings. It is designed for use in evaluation of models that address music-and-language (M&L) tasks such as music captioning, text-to-music generation and music-language retrieval.
5 PAPERS • 1 BENCHMARK
Twitter100k is a large-scale dataset for weakly supervised cross-media retrieval.
4 PAPERS • NO BENCHMARKS YET
A dataset that allows exploration of cross-modal retrieval where images contain scene-text instances.
3 PAPERS • NO BENCHMARKS YET
CiNAT Birds 2021 (Cross-View iNaturalist-2021 Birds) dataset contains ground-level images of bird species along with satellite images associated with the geolocation of the ground-level images. In total, there are 413,959 pairs for training and 14,831 pairs for validation and testing. The ground-level images are of varying sizes, while the satellite images are of size 256×256. Additionally, the dataset comes with rich metadata for each image: geolocation, date, observer ID, and taxonomy.
1 PAPER • NO BENCHMARKS YET
A Zero-Shot Sketch-based Inter-Modal Object Retrieval Scheme for Remote Sensing Images
The image collection of the IAPR TC-12 Benchmark consists of 20,000 still natural images taken from locations around the world, comprising an assorted cross-section of contemporary life: pictures of different sports and actions, photographs of people, animals, cities, landscapes, and many other subjects. Each image is associated with a text caption in up to three languages (English, German, and Spanish).
Music recommendation for videos attracts growing interest in multi-modal research. However, existing systems focus primarily on content compatibility, often ignoring users' preferences, and their inability to interact with users for further refinement or to provide explanations leads to a less satisfying experience. We address these issues with MuseChat, a first-of-its-kind dialogue-based recommendation system that personalizes music suggestions for videos. Our system consists of two key modules: recommendation and reasoning. The recommendation module takes a video, along with optional information including previously suggested music and the user's preferences, and retrieves appropriate music matching the context. The reasoning module, equipped with the power of a Large Language Model (Vicuna-7B) and extended to multi-modal inputs, is able to provide a reasonable explanation for the recommended music. To evaluate the effectiveness of MuseChat, we build
PoseScript is a dataset that pairs a few thousand 3D human poses from AMASS with rich human-annotated descriptions of the body parts and their spatial relationships. This dataset is designed for the retrieval of relevant poses from large-scale datasets and synthetic pose generation, both based on a textual pose description.
A truly multimodal dataset for benchmarking deep learning models on ecological tasks.