NITS-VC System for VATEX Video Captioning Challenge 2020

7 Jun 2020  ·  Alok Singh, Thoudam Doren Singh, Sivaji Bandyopadhyay ·

Video captioning is the process of summarising the content, events and actions of a video into a short textual form, which can be helpful in many research areas such as video-guided machine translation, video sentiment analysis, and providing aid to individuals in need. In this paper, a system description of the framework used for the VATEX-2020 video captioning challenge is presented. We employ an encoder-decoder based approach in which the visual features of the video are encoded using a 3D convolutional neural network (C3D). In the decoding phase, two Long Short-Term Memory (LSTM) recurrent networks are used, in which the visual features and input captions are fused separately, and the final output is generated by taking the element-wise product of the outputs of the two LSTMs. Our model achieves BLEU scores of 0.20 and 0.22 on the public and private test data sets respectively.
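The decoder described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the hidden and vocabulary sizes, weight initialisation, and the pooled C3D feature are all placeholder assumptions; only the overall shape (two parallel LSTMs whose hidden states are combined by an element-wise product before the output projection) follows the description in the abstract.

```python
import numpy as np

# Hypothetical dimensions -- not specified in the paper excerpt.
hidden_dim, vocab_size = 512, 10000
rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM cell step; W packs the four gate weight matrices."""
    z = W @ np.concatenate([x, h])
    i, f, g, o = np.split(z, 4)
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sigm(f) * c + sigm(i) * np.tanh(g)
    return sigm(o) * np.tanh(c), c

# Two parallel decoder LSTMs: one consumes the (pooled) C3D visual
# feature, the other the embedding of the previous caption token.
W_vis = rng.standard_normal((4 * hidden_dim, 2 * hidden_dim)) * 0.01
W_cap = rng.standard_normal((4 * hidden_dim, 2 * hidden_dim)) * 0.01
W_out = rng.standard_normal((vocab_size, hidden_dim)) * 0.01

visual_feat = rng.standard_normal(hidden_dim)  # stand-in for a C3D feature
token_emb = rng.standard_normal(hidden_dim)    # stand-in word embedding

h_v = c_v = h_c = c_c = np.zeros(hidden_dim)
h_v, c_v = lstm_step(visual_feat, h_v, c_v, W_vis)
h_c, c_c = lstm_step(token_emb, h_c, c_c, W_cap)

fused = h_v * h_c                 # element-wise (Hadamard) product fusion
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()              # softmax over the vocabulary
next_word = int(np.argmax(probs)) # greedy choice of the next caption token
```

In practice this step would be unrolled over time, feeding each generated token back in as the next `token_emb`; the sketch shows a single decoding step to keep the fusion point visible.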


Results from the Paper


Task              Dataset  Model    Metric    Metric Value  Global Rank
Video Captioning  VATEX    NITS-VC  BLEU-4    20.0          # 10
                                    CIDEr     24.0          # 10
                                    METEOR    18.0          # 7
                                    ROUGE-L   42.0          # 8
