Multimedia recommendation

15 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Multi-Modal Self-Supervised Learning for Recommendation

hkuds/mmssl 21 Feb 2023

The online emergence of multi-modal sharing platforms (e.g., TikTok, YouTube) is powering personalized recommender systems to incorporate various modalities (e.g., visual, textual, and acoustic) into the latent user representations.

Multi-View Graph Convolutional Network for Multimedia Recommendation

enoche/mmrec 7 Aug 2023

Meanwhile, a behavior-aware fuser is designed to comprehensively model user preferences by adaptively learning the relative importance of different modality features.
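The behavior-aware fuser described above can be sketched as a per-item attention over modality features, weighted by each modality's affinity with a behavior-derived embedding. This is a minimal illustrative sketch, not MGCN's actual implementation; all names (`behavior_aware_fuse`, the feature tensors) are hypothetical, and the feature values are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, dim = 4, 8
# Hypothetical per-modality item features, projected to a shared latent dim.
modal_feats = {
    "visual": rng.normal(size=(n_items, dim)),
    "textual": rng.normal(size=(n_items, dim)),
}
# Behavior-derived item embeddings (e.g., from collaborative filtering).
behavior_emb = rng.normal(size=(n_items, dim))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def behavior_aware_fuse(modal_feats, behavior_emb):
    """Weight each modality per item by its affinity with the behavior embedding."""
    names = sorted(modal_feats)
    stacked = np.stack([modal_feats[m] for m in names], axis=1)  # (items, modalities, dim)
    scores = np.einsum("imd,id->im", stacked, behavior_emb)      # per-item modality affinities
    weights = softmax(scores, axis=1)                            # relative modality importance
    fused = np.einsum("im,imd->id", weights, stacked)            # weighted fusion
    return fused, dict(zip(names, weights.T))

fused, weights = behavior_aware_fuse(modal_feats, behavior_emb)
print(fused.shape)  # (4, 8)
```

In the paper the affinity scores would come from learned projections trained end-to-end; the dot product here just shows where the adaptive per-item weighting enters.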

MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video

weiyinwei/mmgcn ACM International Conference on Multimedia 2019

Existing works on multimedia recommendation largely exploit multi-modal contents to enrich item representations, while less effort is made to leverage information interchange between users and items to enhance user representations and further capture users' fine-grained preferences on different modalities.

ContentWise Impressions: An Industrial Dataset with Impressions Included

ContentWise/contentwise-impressions 3 Aug 2020

In this article, we introduce the ContentWise Impressions dataset, a collection of implicit interactions and impressions of movies and TV series from an Over-The-Top media service, which delivers its media contents over the Internet.

Mining Latent Structures for Multimedia Recommendation

CRIPAC-DIG/LATTICE 19 Apr 2021

To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.
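The structure-learning step described above can be approximated with a simple sketch: build a kNN item-item graph per modality from feature similarity, then aggregate the per-modality graphs with modality weights to obtain a latent item graph. This is an assumption-laden simplification (fixed weights, binary kNN edges) rather than LATTICE's learnable layer.

```python
import numpy as np

def knn_graph(feats, k):
    """Row-normalized kNN item-item graph from cosine similarity."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops
    adj = np.zeros_like(sim)
    for i, row in enumerate(sim):
        adj[i, np.argsort(row)[-k:]] = 1.0  # keep each item's top-k neighbors
    return adj / adj.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n_items, k = 6, 2
modal_feats = {
    "visual": rng.normal(size=(n_items, 16)),
    "textual": rng.normal(size=(n_items, 16)),
}
# Learnable in the paper; fixed uniform weights here for illustration.
modal_weights = {"visual": 0.5, "textual": 0.5}

latent_graph = sum(w * knn_graph(modal_feats[m], k)
                   for m, w in modal_weights.items())
```

The resulting `latent_graph` is the item-item structure on which item embeddings would then be propagated and combined with collaborative signals.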

Latent Structure Mining with Contrastive Modality Fusion for Multimedia Recommendation

cripac-dig/micro 1 Nov 2021

Although having access to multiple modalities might allow us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in previous work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure MIning with ContRastive mOdality fusion method (MICRO for brevity).
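One way to read the contrastive fusion idea above is as an InfoNCE objective that pulls each modality-specific item representation toward a shared fused representation, rather than merely concatenating modalities. The sketch below assumes mean-pooled fusion and random features purely for illustration; it is not MICRO's actual training code.

```python
import numpy as np

def info_nce(anchor, positive, temperature=0.2):
    """InfoNCE: row i of `anchor` should match row i of `positive`;
    all other rows in the batch act as negatives."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(2)
n_items, dim = 8, 16
modal_reps = {
    "visual": rng.normal(size=(n_items, dim)),
    "textual": rng.normal(size=(n_items, dim)),
}
fused = np.mean(list(modal_reps.values()), axis=0)  # simple fusion stand-in

# Align each modality-specific view with the fused representation.
loss = sum(info_nce(rep, fused) for rep in modal_reps.values())
```

Minimizing such a loss encourages the fused representation to retain modality-shared item semantics instead of being dominated by any single modality.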

GRCN: Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback

weiyinwei/grcn 3 Nov 2021

Reorganizing implicit feedback of users as a user-item interaction graph facilitates the applications of graph convolutional networks (GCNs) in recommendation tasks.
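The graph reorganization mentioned above is standard: implicit feedback pairs become a bipartite user-item adjacency, which a GCN then propagates over. A minimal sketch, with toy interactions and a single LightGCN-style propagation step (not GRCN's refined graph):

```python
import numpy as np

# Toy implicit feedback: (user, item) interaction pairs.
interactions = [(0, 0), (0, 2), (1, 1), (2, 0), (2, 1)]
n_users, n_items = 3, 3

# Interaction matrix R, embedded in the symmetric block adjacency
# A = [[0, R], [R^T, 0]] over the joint user+item node set.
R = np.zeros((n_users, n_items))
for u, i in interactions:
    R[u, i] = 1.0
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])

# Symmetric degree normalization: D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# One graph-convolution step over concatenated user and item embeddings.
rng = np.random.default_rng(3)
emb = rng.normal(size=(n_users + n_items, 4))
emb_next = A_hat @ emb
```

GRCN's contribution is to refine this graph by pruning noisy (false-positive) edges before propagation; the block above only shows the base construction such methods start from.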

DualGNN: Dual Graph Neural Network for Multimedia Recommendation

wqf321/dualgnn IEEE Transactions on Multimedia (TMM) 2021

Specifically, we first introduce a single-modal representation learning module, which performs graph operations on the user-microvideo graph in each modality to capture single-modal user preferences on different modalities.

Self-Supervised Learning for Multimedia Recommendation

zltao/slmrec IEEE Transactions on Multimedia (TMM) 2022

To capture multi-modal patterns in the data itself, we go beyond the supervised learning paradigm, and incorporate the idea of self-supervised learning (SSL) into multimedia recommendation.
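A common form of the self-supervised signal described above is augmentation-based: generate two stochastic views of the item features (here via feature dropout, one of the augmentations used in this line of work) and penalize disagreement between them. This is a hedged sketch with made-up values, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def feature_dropout(x, rate, rng):
    """Self-supervised augmentation: randomly zero feature dimensions."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)  # inverted-dropout rescaling

def alignment_loss(a, b):
    """Encourage the two augmented views of each item to agree (cosine)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - (a * b).sum(axis=1).mean()

item_feats = rng.normal(size=(10, 32))
view1 = feature_dropout(item_feats, 0.3, rng)
view2 = feature_dropout(item_feats, 0.3, rng)
loss = alignment_loss(view1, view2)
```

In practice this auxiliary loss is added to the supervised recommendation objective, so the encoder must learn modality patterns that survive the augmentation.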

LightGT: A Light Graph Transformer for Multimedia Recommendation

Liuwq-bit/LightGT SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023

Considering its challenges in effectiveness and efficiency, we propose a novel Transformer-based recommendation model, termed the Light Graph Transformer (LightGT).