Multimodal Topic Learning for Video Recommendation

26 Oct 2020  ·  Shi Pu, Yijiang He, Zheng Li, Mao Zheng

Facilitated by deep neural networks, video recommendation systems have made significant advances. Existing systems directly feed features from different modalities (e.g., user personal data, user behavior data, video titles, video tags, and visual content) into deep neural networks, expecting the networks to implicitly mine user-preferred topics from these features online. However, these features lack explicit semantic topic information, which limits the generation of accurate recommendations. In addition, feature crosses involving visual content features produce high-dimensional features that severely degrade the online computational efficiency of the networks. In this paper, we explicitly separate topic generation from recommendation generation and propose a multimodal topic learning algorithm that exploits three modalities (i.e., tags, titles, and cover images) to generate video topics offline. The topics generated by the proposed algorithm serve as semantic topic features that facilitate both preference scope determination and recommendation generation. Furthermore, we use these semantic topic features in place of visual content features, which effectively reduces online computational cost. The proposed algorithm has been deployed on the Kuaibao information streaming platform, and both online and offline evaluation results show that it performs favorably.
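To make the offline topic-generation idea concrete, below is a minimal sketch (not the authors' implementation) of a multimodal topic model: embeddings for tags, titles, and cover images are projected into a shared space, fused by concatenation, and classified into a topic vocabulary. All layer sizes, the concatenation-based fusion, and the topic count of 1000 are illustrative assumptions, not details from the paper.

    # Hypothetical sketch of offline multimodal topic learning.
    import torch
    import torch.nn as nn

    class MultimodalTopicModel(nn.Module):
        def __init__(self, tag_dim=128, title_dim=256, image_dim=512,
                     hidden_dim=256, num_topics=1000):
            super().__init__()
            # Project each modality into a shared hidden space.
            self.tag_proj = nn.Linear(tag_dim, hidden_dim)
            self.title_proj = nn.Linear(title_dim, hidden_dim)
            self.image_proj = nn.Linear(image_dim, hidden_dim)
            # Fuse by concatenation, then classify into topics.
            self.classifier = nn.Sequential(
                nn.Linear(3 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_topics),
            )

        def forward(self, tag_emb, title_emb, image_emb):
            fused = torch.cat([
                torch.relu(self.tag_proj(tag_emb)),
                torch.relu(self.title_proj(title_emb)),
                torch.relu(self.image_proj(image_emb)),
            ], dim=-1)
            return self.classifier(fused)  # topic logits per video

    # Offline usage: run once per video and store the top-k topic IDs as
    # compact semantic features for the online recommender, replacing raw
    # visual content features (all input tensors here are placeholders).
    model = MultimodalTopicModel()
    tags = torch.randn(4, 128)    # e.g., averaged tag embeddings
    titles = torch.randn(4, 256)  # e.g., pooled title-token embeddings
    covers = torch.randn(4, 512)  # e.g., CNN features of cover images
    topic_ids = model(tags, titles, covers).topk(k=3, dim=-1).indices
    print(topic_ids.shape)  # (4, 3): three topic IDs per video

Because the topic IDs are low-dimensional categorical features computed once offline, any feature crosses in the online ranking model operate on them rather than on high-dimensional visual embeddings, which is the source of the claimed reduction in online computational cost.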
