Trust dynamics and user attitudes on recommendation errors: preliminary results

11 Feb 2020  ·  David A. Pelta, Jose L. Verdegay, Maria T. Lamata, Carlos Cruz Corona ·

Artificial-intelligence-based systems may act as digital nudging techniques that steer, or even coerce, users into decisions not always aligned with their true interests. When such a system properly addresses the issues of Fairness, Accountability, Transparency, and Ethics, the user's trust in it should depend only on the system's output. The aim of this paper is to propose a model for exploring how good and bad recommendations affect overall trust in an idealized recommender system that issues recommendations over a resource with limited capacity. The impact of different user attitudes on trust dynamics is also considered. Using simulations, we ran a large set of experiments that allowed us to observe that: 1) under certain circumstances, all the users ended up accepting the recommendations; and 2) the user attitude (controlled by a single parameter balancing the gain/loss of trust after a good/bad recommendation) has a great impact on the trust dynamics.
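The trust-update mechanism described above could be sketched as follows. This is a minimal illustration only, not the paper's actual model: the function name, the attitude parameter `beta`, the step size `delta`, and the specific update rule are all hypothetical assumptions.

```python
def update_trust(trust, good, beta, delta=0.1):
    """Update a user's trust after one recommendation (illustrative sketch).

    trust: current trust level in [0, 1]
    good:  True if the recommendation turned out to be good
    beta:  attitude parameter in [0, 1]; values near 1 mean trust is
           gained quickly after good recommendations and lost slowly
           after bad ones (hypothetical formulation)
    delta: base step size of the update (hypothetical)
    """
    if good:
        # Gain proportional to beta, shrinking as trust approaches 1.
        trust += beta * delta * (1.0 - trust)
    else:
        # Loss proportional to (1 - beta), shrinking as trust approaches 0.
        trust -= (1.0 - beta) * delta * trust
    # Keep trust bounded in [0, 1].
    return min(1.0, max(0.0, trust))
```

With `beta = 0.7`, a good recommendation raises trust from 0.5 to 0.535, while a bad one lowers it only to 0.485, so an asymmetric attitude like this lets trust drift upward under mixed feedback.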
