A Visually-Grounded Parallel Corpus with Phrase-to-Region Linking

LREC 2020 · Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya

Visually-grounded natural language processing has become an important research direction in the past few years. However, the majority of available cross-modal resources (e.g., image-caption datasets) are built in English and cannot be directly utilized in multilingual or non-English scenarios...

