Generating Diverse and Accurate Visual Captions by Comparative Adversarial Learning

3 Apr 2018 · Dianqi Li, Qiuyuan Huang, Xiaodong He, Lei Zhang, Ming-Ting Sun

We study how to generate captions that are not only accurate in describing an image but also discriminative across different images. The problem is both fundamental and interesting, as most machine-generated captions, despite phenomenal research progress in the past several years, are expressed in a very monotonic and featureless format...
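The core idea named in the title, comparative adversarial learning, scores a caption relative to competing captions rather than in isolation. The paper's exact formulation is not reproduced here, but the comparative signal can be sketched as a softmax over image-caption similarities: a generic caption that fits every image about equally gains little probability mass, while a caption tailored to this image stands out. All function names, the cosine-similarity choice, and the temperature `tau` below are illustrative assumptions, not the authors' implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors (illustrative stand-in
    # for whatever image/caption encoder the model actually uses).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def comparative_scores(image_vec, caption_vecs, tau=0.5):
    # Comparative scoring: each candidate caption is judged relative to the
    # others via a softmax over its similarity to the image. The generated
    # caption's score can then serve as an adversarial reward signal.
    logits = [cosine(image_vec, c) / tau for c in caption_vecs]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Under this kind of scoring, a discriminative caption (embedded close to its image) receives a higher relative score than a featureless one, which is the pressure that pushes the generator away from monotonic captions.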

