Offline Metrics for Evaluating Explanation Goals in Recommender Systems

22 Oct 2023 · André Levi Zanon, Marcelo Garcia Manzato, Leonardo Rocha

Explanations are crucial for improving transparency, persuasiveness, engagement, and users' trust in Recommender Systems (RSs). However, evaluating how well explanation algorithms achieve those goals remains challenging due to the limitations of existing offline metrics. This paper introduces new metrics for evaluating and validating explanation algorithms based on the items and properties used to form the explanation sentence. To validate the metrics, three state-of-the-art post-hoc explanation algorithms were evaluated across six RSs, comparing the offline metric results with those of an online user study. The findings show that the proposed offline metrics can effectively measure the performance of explanation algorithms and highlight a trade-off between the goals of transparency and trust, which are related to popular properties, and the goals of engagement and persuasiveness, which are associated with the diversification of properties displayed to users. Furthermore, the study contributes to the development of more robust evaluation methods for explanation algorithms in RSs.
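As a rough illustration of the kind of offline signal the abstract describes, and not the paper's actual metric definitions, the sketch below computes two simple quantities over the properties mentioned in explanation sentences: the average catalogue popularity of those properties (related to transparency and trust) and their diversity across users (related to engagement and persuasiveness). The function names, data layout, and toy values are all assumptions made for illustration.

```python
from collections import Counter
from typing import Dict, Set


def mean_property_popularity(
    explanations: Dict[int, Set[str]],
    property_frequency: Counter,
) -> float:
    """Average catalogue frequency of the properties shown to each user.

    Higher values indicate explanations built on widely known properties,
    which the abstract links to transparency and trust.
    """
    per_user = []
    for props in explanations.values():
        if props:
            per_user.append(sum(property_frequency[p] for p in props) / len(props))
    return sum(per_user) / len(per_user) if per_user else 0.0


def property_diversity(explanations: Dict[int, Set[str]]) -> float:
    """Fraction of distinct properties over all property mentions.

    Higher values indicate more varied explanation content, which the
    abstract links to engagement and persuasiveness.
    """
    mentions = [p for props in explanations.values() for p in props]
    return len(set(mentions)) / len(mentions) if mentions else 0.0


if __name__ == "__main__":
    # Hypothetical data: user id -> properties mentioned in the explanation sentence.
    explanations = {
        1: {"director:Nolan", "genre:sci-fi"},
        2: {"genre:sci-fi", "actor:DiCaprio"},
        3: {"genre:sci-fi"},
    }
    # Hypothetical property frequencies in the item catalogue.
    freq = Counter({"genre:sci-fi": 500, "director:Nolan": 12, "actor:DiCaprio": 40})
    print(f"Mean property popularity: {mean_property_popularity(explanations, freq):.2f}")
    print(f"Property diversity: {property_diversity(explanations):.2f}")
```

A trade-off like the one reported in the abstract would show up here as metrics moving in opposite directions: explanations dominated by a few frequent properties score high on popularity but low on diversity, and vice versa.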
