The Square Root Agreement Rule for Incentivizing Truthful Feedback on Online Platforms

25 Jul 2015 · Vijay Kamble, Nihar Shah, David Marn, Abhay Parekh, Kannan Ramachandran

A major challenge in obtaining evaluations of products or services on e-commerce platforms is eliciting informative responses in the absence of verifiability. This paper proposes the Square Root Agreement Rule (SRA): a simple reward mechanism that incentivizes truthful responses to objective evaluations on such platforms. In this mechanism, an agent receives a reward for an evaluation only if her answer matches that of her peer, and this reward is inversely proportional to a popularity index of the answer. The index is defined as the square root of the empirical frequency with which any two agents performing the same evaluation agree on that particular answer, computed across evaluations of similar entities operating on the platform. Rarely agreed-upon answers thus earn a higher reward than answers on which agreement is relatively more common. We show that in the many-tasks regime, the truthful equilibrium under SRA is strictly payoff-dominant over large classes of natural equilibria that could arise in these settings, increasing the likelihood of its adoption. While other mechanisms achieve such guarantees, they either impose additional assumptions on the response distribution that are not generally satisfied for objective evaluations, or they incentivize truthful behavior only if each agent performs a prohibitively large number of evaluations and commits to using the same strategy for each one. SRA is the first known incentive mechanism to satisfy these guarantees without imposing any such requirements. Moreover, our empirical findings demonstrate that the incentive properties of SRA remain robust in the presence of mild subjectivity or observational biases in the responses. These properties make SRA uniquely attractive for administering reward-based incentive schemes (e.g., rebates, discounts, or reputation scores) on online platforms.
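As an illustration of the mechanism described in the abstract, below is a minimal sketch of how SRA rewards could be computed from the responses of two peer agents. The function name sra_rewards, the reward scale, and the use of the same batch of tasks to estimate the popularity index are assumptions made here for concreteness; the paper estimates agreement frequencies across evaluations of similar entities on the platform.

```python
from collections import Counter
from math import sqrt


def sra_rewards(answers_a, answers_b):
    """Illustrative sketch of the Square Root Agreement Rule (SRA).

    answers_a, answers_b: answers reported by two peer agents for the same
    sequence of evaluation tasks. Names and normalization are assumptions
    for illustration, not the paper's exact construction.
    """
    if len(answers_a) != len(answers_b) or not answers_a:
        raise ValueError("expected two equal-length, non-empty answer lists")

    n = len(answers_a)

    # Empirical frequency with which the two peers agree on each particular
    # answer, measured across the tasks.
    agreement_counts = Counter(a for a, b in zip(answers_a, answers_b) if a == b)
    popularity = {ans: sqrt(count / n) for ans, count in agreement_counts.items()}

    # An agent is rewarded for a task only if her answer matches her peer's,
    # and the reward is inversely proportional to the popularity index
    # (the square root of the agreement frequency), so rarely agreed-upon
    # answers earn more than commonly agreed-upon ones.
    return [
        1.0 / popularity[a] if a == b else 0.0
        for a, b in zip(answers_a, answers_b)
    ]


# Example: two peers evaluate the same five entities on a binary scale.
peer_1 = ["good", "good", "bad", "good", "bad"]
peer_2 = ["good", "bad", "bad", "good", "good"]
print(sra_rewards(peer_1, peer_2))
# Agreements on "good" are more frequent than on "bad", so the single
# agreement on "bad" earns a larger reward.
```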
