Search Results for author: Yannis Kalantidis

Found 1 paper, 1 paper with code

Learning to Generate Grounded Visual Captions without Localization Supervision

1 code implementation ECCV 2020 Chih-Yao Ma, Yannis Kalantidis, Ghassan AlRegib, Peter Vajda, Marcus Rohrbach, Zsolt Kira

When automatically generating a sentence description for an image or video, it often remains unclear how well the generated caption is grounded, that is, whether the model uses the correct image regions to output particular words, or whether it is hallucinating based on priors in the dataset and/or the language model.

Tasks: Image Captioning, Language Modelling, +2
