Adversarial Collaborative Auto-encoder for Top-N Recommendation

16 Aug 2018 · Feng Yuan, Lina Yao, Boualem Benatallah

During the past decade, model-based recommendation methods have evolved from latent factor models to neural network-based models. Most of these techniques focus on improving overall performance, such as the root mean square error for rating prediction and the hit ratio for top-N recommendation, where the users' feedback is treated as the ground truth. In real-world applications, however, the users' feedback can be contaminated by imperfect user behaviour, namely careless preference selection. Such data contamination poses challenges for the design of robust recommendation methods. In this work, to address the above issue, we propose a general adversarial training framework for neural network-based recommendation models, which improves both model robustness and overall performance. We point out the trade-offs between performance and robustness enhancement, with detailed instructions on how to strike a balance. Specifically, we implement our approach on the collaborative auto-encoder, followed by experiments on three publicly available datasets: MovieLens-1M, Ciao, and FilmTrust. We show that our approach outperforms highly competitive state-of-the-art recommendation methods. In addition, we carry out a thorough analysis of the noise impacts, as well as the complex interactions between model nonlinearity and noise levels. With simple modifications, our adversarial training framework can be applied to a host of neural network-based models, whose robustness and performance are both expected to be improved.
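
The abstract does not spell out how the adversarial perturbations are constructed or where they are injected, so the following is only a minimal sketch of the general idea: an FGSM-style perturbation is applied to the auto-encoder's hidden representation, and a weighted adversarial reconstruction loss is added to the clean objective. All names (`CollabAutoEncoder`, `eps`, `adv_weight`) and the choice of perturbation target are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of adversarial training for a collaborative auto-encoder.
# The perturbation target (hidden code) and hyperparameter names are assumptions.
import torch
import torch.nn as nn

class CollabAutoEncoder(nn.Module):
    def __init__(self, n_items, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, n_items)

    def forward(self, x, delta=None):
        h = self.encoder(x)
        if delta is not None:          # inject an adversarial perturbation on the code
            h = h + delta
        return self.decoder(h)         # reconstruction logits over all items

def adversarial_step(model, x, optimizer, eps=0.5, adv_weight=1.0):
    """One training step on a batch x of binarized user-item interaction rows."""
    criterion = nn.BCEWithLogitsLoss()

    # 1) Clean reconstruction loss.
    loss_clean = criterion(model(x), x)

    # 2) FGSM-style perturbation of the hidden code (assumed attack target).
    h = model.encoder(x)
    delta = torch.zeros_like(h, requires_grad=True)
    loss_for_grad = criterion(model.decoder(h + delta), x)
    grad = torch.autograd.grad(loss_for_grad, delta)[0]
    delta_adv = (eps * grad.sign()).detach()   # fixed perturbation, no grad through it

    # 3) Adversarial reconstruction loss under the perturbation.
    loss_adv = criterion(model(x, delta_adv), x)

    # 4) Joint objective: adv_weight controls the accuracy/robustness trade-off.
    loss = loss_clean + adv_weight * loss_adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `adv_weight` plays the role of the trade-off mentioned in the abstract: larger values push the model toward robustness against perturbed representations, potentially at some cost in clean top-N accuracy, while `eps` bounds the strength of the simulated contamination.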
