Factor Graph Attention

CVPR 2019 · Idan Schwartz, Seunghak Yu, Tamir Hazan, Alexander Schwing

Dialog is an effective way to exchange information, but subtle details and nuances are extremely important. While significant progress has paved a path to address visual dialog with algorithms, details and nuances remain a challenge. Attention mechanisms have demonstrated compelling results to extract details in visual question answering and also provide a convincing framework for visual dialog due to their interpretability and effectiveness. However, the many data utilities that accompany visual dialog challenge existing attention techniques. We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities. To this end, we design a factor graph based attention mechanism which combines any number of utility representations. We illustrate the applicability of the proposed approach on the challenging and recently introduced VisDial datasets, outperforming recent state-of-the-art methods by 1.1% for VisDial0.9 and by 2% for VisDial1.0 on MRR. Our ensemble model improved the MRR score on VisDial1.0 by more than 6%.
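To make the idea concrete, here is a minimal, hedged sketch of the core notion described in the abstract: attention over one utility's entities computed from a factor graph that combines a local (unary) factor with pairwise interaction factors against every other utility. The unary and pairwise scoring functions below (squared norm and mean dot product) are simplified stand-ins for the paper's learned potentials, and all names are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D potential vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(utilities, target):
    """Attention weights over the entities of utilities[target].

    utilities: list of (n_i, d) arrays, one embedding matrix per utility
               (e.g. image regions, question words, history turns).
    Unary potential: squared norm of each entity (toy stand-in for a
    learned local factor). Pairwise potential: mean dot-product similarity
    with the entities of every other utility (toy stand-in for learned
    interaction factors). The factor graph sums these potentials, and a
    softmax turns them into attention weights.
    """
    U = utilities[target]                      # (n, d)
    unary = (U ** 2).sum(axis=1)               # (n,) local factor
    pairwise = np.zeros(len(U))
    for j, V in enumerate(utilities):
        if j == target:
            continue
        pairwise += (U @ V.T).mean(axis=1)     # interaction with utility j
    return softmax(unary + pairwise)

# Three toy utilities of different sizes sharing an embedding dimension.
utils = [rng.normal(size=(5, 4)),
         rng.normal(size=(7, 4)),
         rng.normal(size=(3, 4))]
weights = attend(utils, target=0)              # attention over 5 entities
```

Because every utility contributes a pairwise factor, the same routine handles any number of data utilities, which is the property the abstract emphasizes over prior attention mechanisms that assume a fixed pair of modalities.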


Results from the Paper


Task           Dataset                        Model                  Metric        Value   Global Rank
Visual Dialog  VisDial v0.9 val               9xFGA (VGG)            MRR           68.92   # 1
Visual Dialog  VisDial v0.9 val               9xFGA (VGG)            Mean Rank     3.39    # 1
Visual Dialog  VisDial v0.9 val               9xFGA (VGG)            R@1           55.16   # 1
Visual Dialog  VisDial v0.9 val               9xFGA (VGG)            R@5           86.26   # 1
Visual Dialog  VisDial v0.9 val               9xFGA (VGG)            R@10          92.95   # 1
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     NDCG (x 100)  57.20   # 58
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     MRR (x 100)   69.3    # 5
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     R@1           55.65   # 5
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     R@5           86.73   # 3
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     R@10          94.05   # 3
Visual Dialog  Visual Dialog v1.0 test-std    5xFGA (F-RCNNx101)     Mean Rank     3.14    # 78
