Explainable Recommendation
29 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
CAFE: Coarse-to-Fine Neural Symbolic Reasoning for Explainable Recommendation
User profiles can capture prominent user behaviors from their interaction history, and provide valuable signals about which kinds of path patterns are more likely to lead to potential items of interest for the user.
EXTRA: Explanation Ranking Datasets for Explainable Recommendation
To achieve a standard way of evaluating recommendation explanations, we provide three benchmark datasets for EXplanaTion RAnking (denoted as EXTRA), on which explainability can be measured by ranking-oriented metrics.
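Ranking-oriented evaluation scores how well a model orders candidate explanations against relevance labels; a common choice is NDCG. A minimal sketch (the relevance labels below are invented for illustration, not taken from the EXTRA datasets):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: relevant items ranked higher count more.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the ideal ordering so scores fall in [0, 1].
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg(ideal)
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical relevance labels for five ranked explanations (1 = relevant).
print(round(ndcg([1, 0, 1, 0, 0]), 4))  # prints 0.9197
```

A perfectly ordered list (all relevant explanations first) scores 1.0, so the metric directly rewards pushing good explanations to the top of the ranking.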
Explainable Recommendation with Comparative Constraints on Product Aspects
Not only do we aim at providing comparative explanations involving such items, but we also formulate comparative constraints involving aspect-level comparisons between the target item and the reference items.
Faithfully Explainable Recommendation via Neural Logic Reasoning
Knowledge graphs (KG) have become increasingly important to endow modern recommender systems with the ability to generate traceable reasoning paths to explain the recommendation process.
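A traceable reasoning path is, at its simplest, a walk through KG triples connecting a user to a recommended item. A minimal sketch over a toy graph (the entities and relations are invented for illustration; this is plain path enumeration, not the paper's neural logic model):

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples -- entities/relations are illustrative.
triples = [
    ("user_1", "watched", "movie_a"),
    ("movie_a", "starred_by", "actress_y"),
    ("movie_b", "starred_by", "actress_y"),
]

graph = defaultdict(list)
for h, r, t in triples:
    graph[h].append((r, t))
    graph[t].append((f"~{r}", h))  # inverse edge for backward traversal

def find_paths(start, end, max_hops=3):
    # Depth-first enumeration of relation paths from start to end.
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end and len(path) > 1:
            paths.append(path)
            continue
        if (len(path) - 1) // 2 >= max_hops:
            continue
        for rel, nxt in graph[node]:
            if nxt not in path:
                stack.append((nxt, path + [rel, nxt]))
    return paths

# Finds: user_1 -watched-> movie_a -starred_by-> actress_y <-starred_by- movie_b
print(find_paths("user_1", "movie_b"))
```

Each returned path alternates entities and relations, and can be rendered directly as an explanation ("recommended movie_b because you watched movie_a, which shares actress_y").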
Personalized Transformer for Explainable Recommendation
Transformer, despite its strong language modeling capability, is not personalized and fails to make use of user and item IDs, since the ID tokens are not even in the same semantic space as the words.
Time-aware Path Reasoning on Knowledge Graph for Recommendation
In this work, we propose a novel Time-aware Path reasoning for Recommendation (TPRec for short) method, which leverages temporal information to offer better recommendations with plausible explanations.
Personalized Prompt Learning for Explainable Recommendation
In the latter case, ID vectors are randomly initialized but the model is trained in advance on large corpora, so they are actually in different learning stages.
Post Processing Recommender Systems with Knowledge Graphs for Recency, Popularity, and Diversity of Explanations
Existing explainable recommender systems have mainly modeled relationships between recommended and already experienced products, and shaped explanation types accordingly (e.g., movie "x" starred by actress "y" recommended to a user because that user watched other movies with "y" as an actress).
Learning to Rank Rationales for Explainable Recommendation
Seeing this gap, we propose a model named Semantic-Enhanced Bayesian Personalized Explanation Ranking (SE-BPER) to effectively combine the interaction information and semantic information.
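SE-BPER builds on Bayesian Personalized Ranking, which trains a model to score an explanation the user found helpful above one they did not. A minimal sketch of the generic BPR pairwise loss (embedding dimension and factors are illustrative, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical latent factors for one user and two candidate explanations
# (one labeled helpful, one not).
user = rng.normal(size=dim)
pos_expl = rng.normal(size=dim)
neg_expl = rng.normal(size=dim)

def bpr_loss(u, pos, neg):
    # BPR maximizes the log-sigmoid of the score margin between the
    # positive and negative candidate; the loss is its negation.
    margin = u @ pos - u @ neg
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

print(bpr_loss(user, pos_expl, neg_expl))
```

The loss shrinks as the positive explanation's dot-product score pulls ahead of the negative one's; with a zero margin it equals ln 2, the indifference point.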
Reinforced Path Reasoning for Counterfactual Explainable Recommendation
We also deploy the explanation policy within a recommendation model to enhance recommendation performance.