Debiased Contrastive Learning for Sequential Recommendation

21 Mar 2023 · Yuhao Yang, Chao Huang, Lianghao Xia, Chunzhen Huang, Da Luo, Kangyi Lin

Current sequential recommender systems tackle dynamic user preference learning with various neural techniques, such as Transformers and Graph Neural Networks (GNNs). However, inference from highly sparse user behavior data may hinder the representation ability of sequential pattern encoding. To address the label shortage issue, contrastive learning (CL) methods have recently been proposed to perform data augmentation in two fashions: (i) randomly corrupting the sequence data (e.g., stochastic masking or reordering); (ii) aligning representations across pre-defined contrastive views. Although effective, we argue that current CL-based methods have limitations in addressing popularity bias and in disentangling user conformity from real interest. In this paper, we propose a new Debiased Contrastive learning paradigm for Recommendation (DCRec) that unifies sequential pattern encoding with global collaborative relation modeling through adaptive conformity-aware augmentation. This solution is designed to tackle the popularity bias issue in recommender systems. Our debiased contrastive learning framework effectively captures both the patterns of item transitions within sequences and the dependencies between users across sequences. Experiments on several real-world datasets demonstrate that DCRec significantly outperforms state-of-the-art baselines, indicating its efficacy for recommendation. To facilitate reproducibility of our results, we make our implementation of DCRec publicly available at: https://github.com/HKUDS/DCRec.
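To make the two CL augmentation fashions discussed above concrete, here is a minimal, illustrative Python/PyTorch sketch, not DCRec's implementation (which is available at the GitHub link above). The names mask_sequence, reorder_sequence, info_nce, and MASK_TOKEN are hypothetical, and the loss shown is a generic InfoNCE objective for aligning two augmented views, rather than the paper's conformity-aware contrastive objective.

import random
import torch
import torch.nn.functional as F

MASK_TOKEN = 0  # hypothetical mask/padding item id

def mask_sequence(seq, mask_ratio=0.3):
    # Fashion (i)a: stochastic masking of items in the interaction sequence.
    return [MASK_TOKEN if random.random() < mask_ratio else item for item in seq]

def reorder_sequence(seq, reorder_ratio=0.3):
    # Fashion (i)b: shuffle a random contiguous sub-sequence (reordering).
    seq = list(seq)
    span = max(1, int(len(seq) * reorder_ratio))
    start = random.randint(0, len(seq) - span)
    segment = seq[start:start + span]
    random.shuffle(segment)
    seq[start:start + span] = segment
    return seq

def info_nce(z1, z2, temperature=0.2):
    # Fashion (ii): align representations of two contrastive views.
    # z1, z2: [batch, dim] sequence embeddings from any sequence encoder.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature       # pairwise view similarities
    labels = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch: two corrupted views of one user's item sequence, plus
# stand-in encoder outputs for a batch of 8 sequences.
seq = [5, 12, 7, 33, 2, 18]
view_a, view_b = mask_sequence(seq), reorder_sequence(seq)
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
loss = info_nce(z1, z2)

Because the corruption and view selection here are purely random, popular items dominate both views as often as they dominate the raw data, which is the popularity-bias limitation the abstract argues DCRec's adaptive conformity-aware augmentation is designed to address.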
