OpenViDial 2.0: A Larger-Scale, Open-Domain Dialogue Generation Dataset with Visual Contexts

27 Sep 2021 · Shuhe Wang, Yuxian Meng, Xiaoya Li, Xiaofei Sun, Rongbin Ouyang, Jiwei Li

In order to better simulate real human conversation, models need to generate dialogue utterances based not only on preceding textual contexts but also on visual contexts. However, as multi-modal dialogue learning develops, dataset scale has gradually become a bottleneck. In this report, we release OpenViDial 2.0, a larger-scale open-domain multi-modal dialogue dataset compared to the previous version, OpenViDial 1.0. OpenViDial 2.0 contains a total of 5.6 million dialogue turns extracted from movies and TV series from different sources, and each dialogue turn is paired with its corresponding visual context. We hope this large-scale dataset can help facilitate future research on open-domain multi-modal dialogue generation, e.g., multi-modal pretraining for dialogue generation.
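Since each dialogue turn is paired with one visual context, a loader can simply iterate over aligned (utterance, image) pairs. Below is a minimal sketch of such iteration; the file layout assumed here (one utterance per line in a `dialogue.txt`, images named by turn index under `images/`) is hypothetical and not the official release format, so consult the OpenViDial 2.0 repository for the actual structure.

```python
# Hypothetical layout: dialogue.txt holds one utterance per line,
# images/<turn_idx>.jpg holds the visual context of that turn.
from pathlib import Path

from PIL import Image  # pip install pillow


def load_turns(data_dir: str):
    """Yield (utterance, image) pairs, one per dialogue turn."""
    root = Path(data_dir)
    with open(root / "dialogue.txt", encoding="utf-8") as f:
        for turn_idx, line in enumerate(f):
            utterance = line.strip()
            image = Image.open(root / "images" / f"{turn_idx}.jpg")
            yield utterance, image


# Usage: inspect the first turn of a (hypothetical) local copy.
for utterance, image in load_turns("openvidial2"):
    print(utterance, image.size)
    break
```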


Datasets


Introduced in the Paper:

OpenViDial 2.0

Used in the Paper:

VisDial
Task: Multi-modal Dialogue Generation · Dataset: OpenViDial 2.0

NV, CV, and FV denote the NoVisual, CoarseVisual, and FineVisual baselines from the OpenViDial paper; MI denotes the mutual-information objective; Dis-n is the Distinct-n diversity metric. Global ranks on the benchmark are given in parentheses.

Model         BLEU       Dis-1        Dis-2        Dis-3        Dis-4
NV (w/o MI)   1.95 (#4)  0.0037 (#4)  0.0302 (#4)  0.0929 (#4)  0.1711 (#3)
NV (w/ MI)    1.96 (#3)  0.0039 (#3)  0.0311 (#3)  0.0953 (#3)  0.1630 (#4)
CV (w/o MI)   1.97 (#2)  0.0041 (#2)  0.0353 (#2)  0.0999 (#2)  0.1726 (#2)
FV (w/o MI)   1.99 (#1)  0.0056 (#1)  0.0431 (#1)  0.1250 (#1)  0.2215 (#1)

Methods


No methods listed for this paper.