Self-Annotated Training for Controllable Image Captioning

16 Oct 2021 · Zhangzi Zhu, Tianlei Wang, Hong Qu

The Controllable Image Captioning (CIC) task aims to generate captions conditioned on designated control signals. Several structure-related control signals have been proposed to control the semantic structure of sentences, such as sentence length and Part-of-Speech tag sequences. However, because the accuracy-based reward focuses mainly on content rather than semantic structure, existing reinforcement training methods are not applicable to structure-related CIC models. The lack of reinforcement training leads to exposure bias and an inconsistency between the training objective and the evaluation metrics. In this paper, we propose a novel reinforcement training method for structure-related control signals, Self-Annotated Training (SAT), to improve both the accuracy and the controllability of CIC models. In SAT, a recursive annotation mechanism (RAM) is designed to force the input control signal to match the actual output sentence. Moreover, we propose an extra alignment reward to fine-tune the CIC model trained with SAT, which further enhances controllability. On the MSCOCO benchmark, we conduct extensive experiments with different structure-related control signals and different baseline models; the results demonstrate the effectiveness and generalizability of our methods.
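
The following is a minimal sketch of the self-annotation idea described above, assuming sentence length is the structure-related control signal; the names `model.sample`, `image_feats`, and `cider_reward` are hypothetical placeholders rather than the authors' actual API.

```python
# Illustrative sketch only: sentence length stands in for the
# structure-related control signal; model/reward interfaces are assumed.

def annotate_length(caption_tokens):
    """Re-annotate the control signal from the sentence the model actually
    produced (the core idea behind the recursive annotation mechanism, RAM)."""
    return len(caption_tokens)


def sat_step(model, image_feats, requested_length, cider_reward, align_weight=1.0):
    """One policy-gradient-style update combining an accuracy reward with an
    alignment reward computed against the self-annotated control signal."""
    # 1. Sample a caption conditioned on the requested control signal.
    caption_tokens, log_probs = model.sample(image_feats, control=requested_length)

    # 2. Self-annotation: extract the control signal realized by the generated
    #    sentence, so the signal used for learning matches the actual output.
    realized_length = annotate_length(caption_tokens)

    # 3. Accuracy reward (e.g., CIDEr) scores content quality; the alignment
    #    reward penalizes deviation from the requested control signal.
    r_accuracy = cider_reward(caption_tokens)
    r_align = -abs(realized_length - requested_length)

    # 4. REINFORCE-style loss (a self-critical baseline would normally be
    #    subtracted from the reward; omitted here for brevity).
    reward = r_accuracy + align_weight * r_align
    loss = -reward * sum(log_probs)
    return loss
```

Per the abstract, the alignment reward is applied in a separate fine-tuning stage after SAT; the sketch collapses both ideas into a single step purely for brevity.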
