18 May 2021 • Fatma S. Abousaleh, Wen-Huang Cheng, Neng-Hao Yu, Yu Tsao
In this study, motivated by multimodal learning, which draws on information from multiple modalities, and by the recent success of convolutional neural networks (CNNs) across many fields, we propose a deep learning model, called the visual-social convolutional neural network (VSCNN), which predicts the popularity of a posted image by incorporating several types of visual and social features into a unified network model.
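The two-branch fusion idea behind such a model can be sketched as follows. This is an illustrative NumPy forward pass, not the paper's actual VSCNN architecture: the feature dimensions, hidden sizes, and weights are all assumed for demonstration, and the visual branch stands in for CNN-extracted features.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def dense(x, w, b):
    return x @ w + b

# Hypothetical dimensions: a 512-d visual feature vector (e.g. from a CNN)
# and a 10-d social feature vector (e.g. follower count, tag count).
VIS_DIM, SOC_DIM, HID = 512, 10, 64

# Randomly initialised weights stand in for trained parameters.
Wv, bv = rng.normal(size=(VIS_DIM, HID)) * 0.01, np.zeros(HID)
Ws, bs = rng.normal(size=(SOC_DIM, HID)) * 0.01, np.zeros(HID)
Wf, bf = rng.normal(size=(2 * HID, 1)) * 0.01, np.zeros(1)

def predict_popularity(visual_feat, social_feat):
    """Two-branch fusion: each modality passes through its own
    subnetwork, the hidden representations are concatenated, and a
    final layer maps the fused vector to one popularity score."""
    hv = relu(dense(visual_feat, Wv, bv))   # visual branch
    hs = relu(dense(social_feat, Ws, bs))   # social branch
    fused = np.concatenate([hv, hs], axis=-1)
    return dense(fused, Wf, bf)             # scalar score

score = predict_popularity(rng.normal(size=VIS_DIM),
                           rng.normal(size=SOC_DIM))
print(score.shape)  # (1,)
```

Fusing at the hidden-representation level (rather than simply concatenating raw inputs) lets each modality be transformed by a subnetwork suited to it before the joint prediction is made.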