MSR-VTT

MSR-VTT (Microsoft Research Video to Text) is a large-scale dataset for open-domain video captioning. It consists of 10,000 video clips across 20 categories, and each clip is annotated with 20 English sentences by Amazon Mechanical Turk workers, yielding about 29,000 unique words across all captions. The standard split uses 6,513 clips for training, 497 for validation, and 2,990 for testing.
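As a quick sanity check on the numbers above, the standard split sizes sum to the full 10,000 clips, and 20 captions per clip gives 200,000 captions in total. A minimal sketch (the dictionary layout is illustrative, not an official API):

```python
# Standard MSR-VTT split sizes as stated in the description
SPLITS = {"train": 6513, "val": 497, "test": 2990}
CAPTIONS_PER_CLIP = 20

total_clips = sum(SPLITS.values())
total_captions = total_clips * CAPTIONS_PER_CLIP

print(total_clips)     # 10000
print(total_captions)  # 200000
```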

Source: Learning to Discretely Compose Reasoning Module Networks for Video Captioning

License

  • Unknown

Modalities

  • Videos
  • Texts

Languages

  • English

Tasks

  • Video Captioning