MemeSem: A Multi-modal Framework for Sentiment Analysis of Memes via Transfer Learning

In the age of the internet, memes have become one of the most popular forms of online content, yet despite this rapid growth, meme sentiment analysis has received little attention. In this paper, we present MemeSem, a multimodal deep neural network framework for sentiment analysis of memes via transfer learning. Our proposed model uses VGG19 pre-trained on the ImageNet dataset and the BERT language model to learn the visual and textual features of a meme, and combines them to make predictions. We perform a comparative analysis of MemeSem against various baseline models. For our experiments, we prepared a dataset of 10,115 internet memes labeled with three sentiment classes (Positive, Negative, and Neutral). Our proposed model outperforms both the multimodal baselines and the independent unimodal models based on either images or text, surpassing the unimodal and multimodal baselines by 10.69% and 3.41% on average, respectively.
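
The paper does not include reference code, but the architecture described in the abstract can be sketched in PyTorch. The following is a minimal, illustrative sketch only: it assumes simple concatenation of the VGG19 and BERT features as the fusion step, and the head layer sizes, dropout rate, and class name MemeSemSketch are hypothetical, not the authors' exact implementation.

# Minimal sketch of the VGG19 + BERT fusion classifier described above.
# Assumption: fusion is done by concatenating the two feature vectors;
# the paper does not specify the fusion strategy or head dimensions.
import torch
import torch.nn as nn
from torchvision.models import vgg19
from transformers import BertModel

class MemeSemSketch(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Visual branch: VGG19 pre-trained on ImageNet, with the final
        # 1000-way classification layer removed so it emits 4096-d features.
        backbone = vgg19(weights="IMAGENET1K_V1")
        backbone.classifier = nn.Sequential(*list(backbone.classifier.children())[:-1])
        self.visual = backbone
        # Textual branch: pre-trained BERT; pooled [CLS] output is 768-d.
        self.textual = BertModel.from_pretrained("bert-base-uncased")
        # Fusion head (hypothetical sizes): concatenate and classify into
        # the three sentiment classes (Positive, Negative, Neutral).
        self.head = nn.Sequential(
            nn.Linear(4096 + 768, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, image, input_ids, attention_mask):
        img_feat = self.visual(image)  # (B, 4096)
        txt_feat = self.textual(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output  # (B, 768)
        fused = torch.cat([img_feat, txt_feat], dim=1)
        return self.head(fused)  # logits over the sentiment classes

In practice, images would be resized and normalized with the standard ImageNet transforms, and the meme text (e.g. OCR-extracted captions) tokenized with the matching BertTokenizer before being passed to the model.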
