Adaptive Course Recommendation System

As users progress through courses, their interests tend to shift as their understanding deepens. Existing course recommendation methods usually assume that users' preferences are static and therefore fail to capture dynamic interests in sequential learning behaviours. As a result, their recommendations suffer from low accuracy and adaptivity, especially when users are interested in many different courses, which makes them poorly suited to the online course recommendation scenario. In this paper, we propose a novel course recommendation framework, named Dynamic Attention and hierarchical Reinforcement Learning (DARL), to improve the adaptivity of the recommendation model. DARL automatically captures the user's preferences in each interaction between a profile reviser and a recommendation model, thereby improving the effectiveness of course recommendation. To track changes in users' preferences, DARL adaptively updates the attention weight of the corresponding course across sessions, improving recommendation accuracy. We perform empirical experiments on two real-world MOOC (Massive Open Online Course) datasets. Experimental results demonstrate that DARL significantly outperforms state-of-the-art course recommendation methods on major evaluation metrics.
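
The paper's implementation is not reproduced here. The sketch below is only a minimal, hypothetical illustration of the two ideas named in the abstract: a session-level attention module that re-weights the user's course history each session, and a profile reviser that decides which history items the recommender should attend to. All module names, dimensions, and the sampling scheme are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of per-session dynamic attention plus a profile reviser.
# Not the DARL implementation; names and dimensions are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicCourseAttention(nn.Module):
    """Recomputes attention weights over the kept course history for each session."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.query = nn.Linear(emb_dim, emb_dim)  # session context -> query
        self.key = nn.Linear(emb_dim, emb_dim)    # course embedding -> key

    def forward(self, history: torch.Tensor, session_ctx: torch.Tensor):
        # history: (num_courses, emb_dim); session_ctx: (emb_dim,)
        q = self.query(session_ctx)                               # (emb_dim,)
        k = self.key(history)                                     # (num_courses, emb_dim)
        weights = F.softmax(k @ q / history.size(-1) ** 0.5, dim=0)
        profile = (weights.unsqueeze(-1) * history).sum(dim=0)    # weighted user profile
        return profile, weights


class ProfileReviser(nn.Module):
    """Toy high-level policy: samples which history courses to keep before attending."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.scorer = nn.Linear(emb_dim, 1)

    def forward(self, history: torch.Tensor):
        keep_prob = torch.sigmoid(self.scorer(history)).squeeze(-1)  # (num_courses,)
        keep = torch.bernoulli(keep_prob)                            # sampled revision
        return keep, keep_prob


# Toy usage: revise the profile, attend over the kept courses, score candidates.
emb_dim, n_hist, n_cand = 32, 5, 10
history = torch.randn(n_hist, emb_dim)        # embeddings of previously taken courses
session_ctx = torch.randn(emb_dim)            # embedding of the current session
candidates = torch.randn(n_cand, emb_dim)     # embeddings of candidate courses

reviser = ProfileReviser(emb_dim)
attention = DynamicCourseAttention(emb_dim)

keep, _ = reviser(history)
mask = keep.bool()
kept = history[mask] if mask.any() else history   # fall back to full history if all dropped
profile, weights = attention(kept, session_ctx)
scores = candidates @ profile                     # scores for candidate courses
print(scores.topk(3).indices)                     # hypothetical top-3 recommendations
```

In a hierarchical RL setting, the reviser would be trained with a reward derived from the recommender's downstream performance; the snippet omits that training loop and only shows the forward pass.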
