Interest-Related Item Similarity Model Based on Multimodal Data for Top-N Recommendation

13 Feb 2019 · Junmei Lv, Bin Song, Jie Guo, Xiaojiang Du, Mohsen Guizani

Recommendation systems are now applied in e-commerce, video websites, social networking sites, and other fields, bringing great convenience to people's daily lives. The information in these systems is diverse and abundant, so the proportion of unstructured multimodal data such as text, images, and video is increasing. However, owing to the representation gap between different modalities, it is difficult to exploit unstructured multimodal data effectively to improve recommendation performance. In this paper, we propose an end-to-end Multimodal Interest-Related Item Similarity model (Multimodal IRIS) that provides recommendations from multimodal data sources. Specifically, the Multimodal IRIS model consists of three modules: a multimodal feature learning module, an Interest-Related Network (IRN) module, and an item similarity recommendation module. The multimodal feature learning module adds a knowledge sharing unit among the different modalities. The IRN then learns the interest relevance between the target item and each historical item. Finally, the multimodal feature learning, IRN, and item similarity recommendation modules are unified into an integrated system that achieves performance gains and accommodates the addition or absence of individual modalities. Extensive experiments on real-world datasets show that, by modeling the multimodal data people may pay more attention to when selecting items, the proposed Multimodal IRIS significantly improves accuracy and interpretability on the top-N recommendation task over state-of-the-art methods.
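The three-module pipeline described above can be sketched in a toy forward pass. This is a minimal illustration, not the paper's implementation: the dimensions, the random weights standing in for learned parameters, the shared projection standing in for the knowledge sharing unit, and the dot-product attention standing in for the IRN are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not from the paper)
d_text, d_img, d, n_hist = 16, 32, 8, 5

# Random weights standing in for learned parameters
W_text = rng.normal(size=(d_text, d)) * 0.1
W_img = rng.normal(size=(d_img, d)) * 0.1
W_share = rng.normal(size=(d, d)) * 0.1  # stand-in for the knowledge sharing unit


def embed_item(x_text, x_img):
    """Multimodal feature learning: project each modality into a common
    space, then mix them through a shared transform (a crude stand-in
    for the paper's knowledge sharing unit)."""
    h = np.tanh(x_text @ W_text) + np.tanh(x_img @ W_img)
    return np.tanh(h @ W_share)


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def score(target, history):
    """IRN + item-similarity recommendation (sketch): weight each
    historical item by its relevance to the target item, then
    aggregate the target-history similarities into one score."""
    sims = history @ target   # unnormalized target-history similarities
    attn = softmax(sims)      # interest-related weights over history
    return float(attn @ sims)


# Toy forward pass for one user: embed a target item and n_hist
# historical items from random "modality features", then score.
target = embed_item(rng.normal(size=d_text), rng.normal(size=d_img))
history = np.stack([
    embed_item(rng.normal(size=d_text), rng.normal(size=d_img))
    for _ in range(n_hist)
])
print(score(target, history))
```

For top-N recommendation, one would compute this score for every candidate item and keep the N highest; handling an absent modality would amount to dropping its term from `embed_item`.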
