MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation

10 Nov 2022  ·  Jiazhan Feng, Qingfeng Sun, Can Xu, Pu Zhao, Yaming Yang, Chongyang Tao, Dongyan Zhao, Qingwei Lin

Responding with multi-modal content has been recognized as an essential capability for an intelligent conversational agent. In this paper, we introduce the MMDialog dataset to better facilitate multi-modal conversation. MMDialog is composed of a curated set of 1.08 million real-world dialogues with 1.53 million unique images across 4,184 topics. MMDialog has two main and unique advantages. First, it is the largest multi-modal conversation dataset by number of dialogues, with 88x more dialogues than the previous largest. Second, it covers a massive range of topics, generalizing to the open domain. To build an engaging dialogue system with this dataset, we propose and normalize two response-producing tasks based on retrieval and generative scenarios. In addition, we build two baselines for the above tasks with state-of-the-art techniques and report their experimental performance. We also propose a novel evaluation metric, MM-Relevance, to measure the quality of multi-modal responses. Our dataset and scripts are available at https://github.com/victorsungo/MMDialog.

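The abstract does not spell out how MM-Relevance is computed; the paper defines it through CLIP-based vision-language matching between the produced and the ground-truth multi-modal responses. Below is a minimal, hedged sketch of that idea using the Hugging Face `transformers` CLIP model. The function names, the greedy max-similarity pairing, and the F1-style aggregation are illustrative assumptions for exposition, not a verified reproduction of the paper's exact formula.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_elements(texts, images):
    """Encode the text and image elements of one multi-modal response into
    the shared CLIP space, L2-normalized so dot products are cosine sims."""
    feats = []
    if texts:
        batch = processor(text=texts, return_tensors="pt",
                          padding=True, truncation=True)
        feats.append(model.get_text_features(**batch))
    if images:  # `images` is a list of PIL.Image objects
        batch = processor(images=images, return_tensors="pt")
        feats.append(model.get_image_features(**batch))
    feats = torch.cat(feats, dim=0)
    return feats / feats.norm(dim=-1, keepdim=True)

@torch.no_grad()
def mm_relevance_sketch(pred_texts, pred_images, gold_texts, gold_images):
    """Illustrative score (an assumption, not the paper's exact metric):
    match each predicted element to its most similar ground-truth element
    and vice versa, then combine the two averages as an F1."""
    pred = embed_elements(pred_texts, pred_images)
    gold = embed_elements(gold_texts, gold_images)
    sims = pred @ gold.T                       # pairwise cosine similarities
    precision = sims.max(dim=1).values.mean()  # predicted elements vs. gold
    recall = sims.max(dim=0).values.mean()     # gold elements vs. predicted
    return (2 * precision * recall / (precision + recall)).item()
```

Under these assumptions, the score is a scalar cosine-similarity aggregate (higher means the predicted multi-modal response is closer to the ground truth); consult the paper for the exact alignment and aggregation used by MM-Relevance.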

Datasets


Introduced in the Paper:

MMDialog

Used in the Paper:

Image-Chat, PhotoChat, OpenViDial, MMChat
Task                           Dataset   Model   Metric  Value  Global Rank
Multimodal Intent Recognition  MMDialog  Divter  F1      75.5   #2
Multimodal Intent Recognition  MMDialog  DE++    F1      59.0   #3
