Enhancing Recommender Systems: A Strategy to Mitigate False Negative Impact

In the implicit collaborative filtering (CF) task of recommender systems, recent works mainly focus on model structure design with promising techniques like graph neural networks (GNNs). Effective and efficient negative sampling methods that suit these models, however, remain underdeveloped. One challenge is that existing hard negative samplers tend to suffer from more severe over-fitting during model training. In this work, we first study the reason behind this over-fitting and, with supporting experiments, attribute it to the incorrect selection of false negative instances. In addition, we empirically observe a counter-intuitive phenomenon: polluting hard negative samples' embeddings with a quite large proportion of positive samples' embeddings leads to remarkable gains in prediction accuracy. On top of this finding, we present a novel negative sampling strategy, positive-dominated negative synthesizing (PDNS). Moreover, we provide a theoretical analysis and derive a simple equivalent algorithm of PDNS, in which only a soft factor is added to the loss function. Comprehensive experiments on three real-world datasets demonstrate the superiority of the proposed method in terms of both effectiveness and robustness.
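
To make the core idea concrete, below is a minimal sketch (not the paper's implementation) of how a positive-dominated synthesized negative could be plugged into a BPR-style loss. The function names, the mixing coefficient `alpha`, and the use of PyTorch are all assumptions for illustration; the paper's equivalent reformulation, which only adds a soft factor to the loss, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def synthesize_negative(pos_emb, hard_neg_emb, alpha=0.9):
    """Mix a hard negative embedding with a large proportion of the
    positive embedding (alpha = positive share; hypothetical naming,
    not the paper's notation)."""
    return alpha * pos_emb + (1.0 - alpha) * hard_neg_emb

def bpr_loss_with_synthesized_negative(user_emb, pos_emb, hard_neg_emb, alpha=0.9):
    """BPR loss where the raw hard negative is replaced by the
    positive-dominated synthesized negative."""
    syn_neg = synthesize_negative(pos_emb, hard_neg_emb, alpha)
    pos_score = (user_emb * pos_emb).sum(dim=-1)
    neg_score = (user_emb * syn_neg).sum(dim=-1)
    return -F.logsigmoid(pos_score - neg_score).mean()
```

In this sketch, a large `alpha` corresponds to the counter-intuitive observation above: the synthesized negative is dominated by the positive embedding, which softens the penalty on likely false negatives.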
