Inducing Honest Reporting Without Observing Outcomes: An Application to the Peer-Review Process

12 Sep 2013  ·  Arthur Carvalho, Stanko Dimitrov, Kate Larson

When eliciting opinions from a group of experts, traditional devices used to promote honest reporting assume that there is an observable future outcome. In practice, however, this assumption is not always reasonable. In this paper, we propose a scoring method built on strictly proper scoring rules to induce honest reporting without assuming observable outcomes. Our method assigns scores based on pairwise comparisons between the reports of the experts in the group. For ease of exposition, we introduce our scoring method by illustrating its application to the peer-review process. To do so, we start by modeling the peer-review process with a Bayesian model that takes into account the uncertainty regarding the quality of the manuscript. We then introduce our scoring method to evaluate the reported reviews. Under the assumptions that reviewers are Bayesian decision-makers and that they cannot influence each other's reviews, we show that risk-neutral reviewers strictly maximize their expected scores by honestly disclosing their reviews. We also show how the group's scores can be used to find a consensual review. Experimental results show that encouraging honest reporting through the proposed scoring method produces more accurate reviews than the traditional peer-review process.
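To give a concrete sense of the kind of outcome-free scoring the abstract describes, the sketch below scores each reviewer's reported distribution over quality levels against the reports of the other reviewers using the quadratic (Brier) strictly proper scoring rule. The function names (`quadratic_score`, `pairwise_scores`), the use of a draw from a peer's report as a proxy outcome, and the averaging over all peers are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (not the paper's formulas): pairwise, peer-based scoring
# with a quadratic (Brier) strictly proper scoring rule.
import numpy as np

def quadratic_score(report: np.ndarray, realized: int) -> float:
    """Quadratic (Brier-style) strictly proper score of a reported
    distribution `report` over quality levels, evaluated at index `realized`."""
    return 2.0 * report[realized] - np.dot(report, report)

def pairwise_scores(reports: np.ndarray, rng=None) -> np.ndarray:
    """For each reviewer i, average the quadratic score of i's report
    evaluated at a quality level drawn from each peer j's report.
    `reports` has shape (n_reviewers, n_quality_levels); rows sum to 1."""
    rng = np.random.default_rng(rng)
    n, k = reports.shape
    scores = np.zeros(n)
    for i in range(n):
        peer_scores = []
        for j in range(n):
            if j == i:
                continue
            # Assumption for illustration: treat a draw from peer j's
            # reported distribution as a proxy for the unobservable outcome.
            proxy = rng.choice(k, p=reports[j])
            peer_scores.append(quadratic_score(reports[i], proxy))
        scores[i] = np.mean(peer_scores)
    return scores

# Example: three reviewers reporting distributions over three quality levels.
reports = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.2, 0.7],
])
print(pairwise_scores(reports, rng=0))
```

Because the scoring rule is strictly proper, a reviewer who regards the peers' reports as informative about the manuscript's quality has, under this kind of scheme, an incentive to report something close to their true belief; this is the flavor of the incentive result stated in the abstract, though the paper's own mechanism and assumptions should be consulted for the precise statement.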
