We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from the history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and to benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, for a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder, and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog in which the AI agent is asked to sort a set of candidate answers and is evaluated on metrics such as the mean reciprocal rank of the human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models, and visual chatbot are available at https://visualdialog.org
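The retrieval-based evaluation protocol described above can be sketched in a few lines: the model sorts the candidate answers for each question, and we record the 1-indexed rank of the human (ground-truth) response, from which mean reciprocal rank (MRR), recall@k, and mean rank follow. This is a minimal illustrative sketch, not the authors' released evaluation code; the function name and signature are assumptions.

```python
def retrieval_metrics(gt_ranks, ks=(1, 5, 10)):
    """Compute retrieval metrics from the 1-indexed rank of the
    ground-truth (human) answer in each sorted candidate list.

    gt_ranks -- list of ranks, one per dialog round evaluated.
    """
    n = len(gt_ranks)
    metrics = {
        # Mean reciprocal rank: average of 1/rank over all rounds.
        "MRR": sum(1.0 / r for r in gt_ranks) / n,
        # Mean rank of the human response among the candidates.
        "Mean Rank": sum(gt_ranks) / n,
    }
    for k in ks:
        # Recall@k: fraction of rounds where the human answer
        # appears in the top-k sorted candidates.
        metrics[f"R@{k}"] = sum(r <= k for r in gt_ranks) / n
    return metrics

# Hypothetical example: ranks of the human response for three rounds.
print(retrieval_metrics([1, 3, 12]))
```

Higher MRR and R@k and lower mean rank are better; the leaderboard numbers below report exactly these quantities (with NDCG added for VisDial v1.0).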

CVPR 2017

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Visual Dialog | VisDial v0.9 val | HRE-QIH-D | MRR | 0.5807 | #17 |
| | | | Mean Rank | 5.78 | #17 |
| | | | R@1 | 43.82 | #17 |
| | | | R@10 | 84.07 | #17 |
| | | | R@5 | 74.68 | #16 |
| | | | MRR | 0.5846 | #16 |
| | | | Mean Rank | 5.72 | #16 |
| | | | R@1 | 44.67 | #16 |
| | | | R@10 | 84.22 | #16 |
| | | | R@5 | 74.50 | #17 |
| Visual Dialog | VisDial v0.9 val | MN-QIH-D | MRR | 0.5965 | #15 |
| | | | Mean Rank | 5.46 | #15 |
| | | | R@1 | 45.55 | #15 |
| | | | R@10 | 85.37 | #15 |
| | | | R@5 | 76.22 | #15 |
| Visual Dialog | Visual Dialog v1.0 test-std | MN-QIH-D | NDCG (x 100) | 45.3 | #58 |
| | | | MRR (x 100) | 55.4 | #44 |
| | | | R@1 | 40.95 | #45 |
| | | | R@5 | 72.45 | #36 |
| | | | R@10 | 82.83 | #38 |
| | | | Mean | 5.95 | #18 |
| Visual Dialog | Visual Dialog v1.0 test-std | HRE-QIH-D | NDCG (x 100) | 45.5 | #57 |
| | | | MRR (x 100) | 54.2 | #46 |
| | | | R@1 | 39.93 | #46 |
| | | | R@5 | 70.45 | #39 |
| | | | R@10 | 81.50 | #45 |
| | | | Mean | 6.41 | #12 |
| Visual Dialog | Visual Dialog v1.0 test-std | MN-QIH-D | NDCG (x 100) | 47.5 | #55 |
| | | | MRR (x 100) | 55.5 | #43 |
| | | | R@1 | 40.98 | #44 |
| | | | R@5 | 72.30 | #37 |
| | | | R@10 | 83.30 | #36 |
| | | | Mean | 5.92 | #19 |