Attention based video captioning framework for Hindi

In recent times, active research has been devoted to bridging the gap between computer vision and natural language. In this paper, we address the problem of Hindi video captioning. In a linguistically diverse country like India, it is important to provide tools that can describe visual content in native languages. In this work, we employ a hybrid attention mechanism that extends soft temporal attention with semantic attention, enabling the system to decide when to focus on the visual context vector and when to focus on the semantic input. The visual context vector of the input video is extracted using a 3D convolutional neural network (3D CNN), and a Long Short-Term Memory (LSTM) recurrent network with an attention module is used to decode the encoded context vector. We experiment on a dataset built in-house for Hindi video captioning by translating the MSR-VTT dataset and post-editing the translations. Our system achieves a CIDEr score of 0.369 and a METEOR score of 0.393, outperforming other baseline models, including an RMN (Reasoning Module Networks) based model.
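
To make the hybrid attention concrete, the sketch below shows one decoding step of an LSTM decoder that attends over 3D-CNN segment features (temporal attention) and over semantic tag embeddings (semantic attention), with a learned gate deciding how much to rely on each context. This is a minimal PyTorch illustration under assumed layer sizes and a simple sigmoid gate; it is not the paper's released implementation, and the class and parameter names (HybridAttentionDecoder, feat_dim, sem_dim, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class HybridAttentionDecoder(nn.Module):
    """Sketch of an LSTM decoder mixing temporal (visual) and semantic attention.
    Layer sizes and the gating scheme are illustrative assumptions."""

    def __init__(self, vocab_size, feat_dim=2048, sem_dim=300, hid_dim=512, emb_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTMCell(emb_dim + feat_dim + sem_dim, hid_dim)
        # temporal attention over 3D-CNN features of video segments
        self.att_v = nn.Linear(hid_dim + feat_dim, 1)
        # semantic attention over embeddings of predicted semantic tags
        self.att_s = nn.Linear(hid_dim + sem_dim, 1)
        # gate deciding how much to rely on visual vs. semantic context
        self.gate = nn.Linear(hid_dim, 1)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, word_ids, vid_feats, sem_feats, state):
        # word_ids: (B,) previous word; vid_feats: (B, T, feat_dim); sem_feats: (B, K, sem_dim)
        h, c = state
        # soft temporal attention over video segments
        hv = h.unsqueeze(1).expand(-1, vid_feats.size(1), -1)
        alpha = torch.softmax(self.att_v(torch.cat([hv, vid_feats], -1)).squeeze(-1), dim=1)
        v_ctx = (alpha.unsqueeze(-1) * vid_feats).sum(1)
        # semantic attention over tag embeddings
        hs = h.unsqueeze(1).expand(-1, sem_feats.size(1), -1)
        beta = torch.softmax(self.att_s(torch.cat([hs, sem_feats], -1)).squeeze(-1), dim=1)
        s_ctx = (beta.unsqueeze(-1) * sem_feats).sum(1)
        # gate: decide when to focus on visual vs. semantic context
        g = torch.sigmoid(self.gate(h))
        x = torch.cat([self.embed(word_ids), g * v_ctx, (1 - g) * s_ctx], dim=-1)
        h, c = self.lstm(x, (h, c))
        return self.out(h), (h, c)
```

Each call produces the logits over the Hindi vocabulary for the next word; at inference time the predicted token would be fed back in as word_ids for the following step, starting from a zero (or learned) initial LSTM state.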

Datasets

Hindi MSR-VTT (built in-house by translating the MSR-VTT dataset into Hindi, followed by post-editing)

Results from the Paper


Task             | Dataset       | Model         | Metric | Value | Global Rank
Video Captioning | Hindi MSR-VTT | V+S-Att-based | BLEU4  | 36.2  | #2
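
The abstract and the table above report CIDEr, METEOR, and BLEU4. The snippet below is a minimal sketch of how such corpus-level captioning metrics are commonly computed with the pycocoevalcap package; the video ids and captions are made-up examples, the scorers expect pre-tokenized text, and METEOR's bundled resources are English-oriented, so its exact configuration for Hindi is an assumption rather than the authors' documented evaluation pipeline.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor

# Hypothetical, pre-tokenized data: id -> list of reference captions / single hypothesis.
gts = {"video1": ["एक आदमी गिटार बजा रहा है", "आदमी संगीत बजाता है"]}
res = {"video1": ["एक आदमी गिटार बजा रहा है"]}

bleu_scores, _ = Bleu(4).compute_score(gts, res)    # bleu_scores[3] is BLEU-4
cider_score, _ = Cider().compute_score(gts, res)
meteor_score, _ = Meteor().compute_score(gts, res)  # requires Java; English-tuned by default
print(bleu_scores[3], cider_score, meteor_score)
```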

Methods

3D CNN (video encoder), LSTM (decoder), soft temporal attention, semantic attention