Arbitrariness of peer review: A Bayesian analysis of the NIPS experiment

23 Jul 2015 · Olivier Francois

The principle of peer review is central to the evaluation of research, as it ensures that only high-quality items are funded or published. But peer review has also received criticism, as the selection of reviewers may introduce biases into the system. In 2014, the organizers of the "Neural Information Processing Systems" (NIPS) conference conducted an experiment in which $10\%$ of submitted manuscripts (166 items) went through the review process twice. Arbitrariness was measured as the conditional probability that an accepted submission would be rejected when examined by the second committee. This probability was equal to $60\%$, for a total acceptance rate equal to $22.5\%$. Here we present a Bayesian analysis of these two numbers, introducing a hidden parameter that measures the probability that a submission meets basic quality criteria. The standard quality criteria usually include novelty, clarity, reproducibility, correctness and the absence of misconduct, and are met by a large proportion of submitted items. The Bayesian estimate for the hidden parameter was equal to $56\%$ ($95\%$ CI: $(0.34, 0.83)$), and had a clear interpretation. The result suggested that the total acceptance rate should be increased in order to decrease arbitrariness estimates in future review processes.

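The abstract does not spell out the generative model, but one natural reading is: a submission meets the basic quality criteria with probability $q$ (the hidden parameter), and each committee independently accepts a criteria-meeting submission with probability $a$ while rejecting all others. Under these assumed decision rules, the overall acceptance rate is $q \cdot a$ and the conditional rejection probability is $1 - a$, so a point estimate is simply $q \approx 0.225 / (1 - 0.60) \approx 0.56$, consistent with the value quoted above. The sketch below illustrates a grid-based Bayesian analysis of this assumed model; the counts are reconstructed from the quoted percentages and the 166 duplicated submissions, and are approximations rather than the paper's actual data.

```python
# Minimal sketch (not the author's code) of a Bayesian analysis of the two
# numbers quoted in the abstract, under an assumed generative model:
#   - a submission meets the basic quality criteria with probability q,
#   - each committee independently accepts a criteria-meeting submission
#     with probability a, and always rejects the others.
# Overall acceptance rate: q * a; conditional rejection probability: 1 - a.
import numpy as np

n_dup = 166                    # duplicated submissions in the NIPS experiment
n_acc1 = round(0.225 * n_dup)  # ~37 accepted by the first committee (approx.)
n_rej2 = round(0.60 * n_acc1)  # ~22 of those rejected by the second committee (approx.)

# Uniform grid priors over the hidden quality probability q and the
# conditional acceptance probability a.
q = np.linspace(0.001, 0.999, 500)
a = np.linspace(0.001, 0.999, 500)
Q, A = np.meshgrid(q, a, indexing="ij")

# Binomial log-likelihoods for the two reconstructed counts.
loglik = (n_acc1 * np.log(Q * A) + (n_dup - n_acc1) * np.log(1 - Q * A)
          + n_rej2 * np.log(1 - A) + (n_acc1 - n_rej2) * np.log(A))

# Normalized joint posterior on the grid.
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Marginal posterior of the hidden parameter q (sum over the a axis).
post_q = post.sum(axis=1)
mean_q = np.sum(q * post_q)
cdf = np.cumsum(post_q)
ci = (q[np.searchsorted(cdf, 0.025)], q[np.searchsorted(cdf, 0.975)])
print(f"posterior mean of q ~ {mean_q:.2f}, 95% CI ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```

Under these assumptions the posterior for the hidden parameter concentrates near the point estimate of roughly 0.56, with a wide credible interval reflecting the small number of duplicated submissions; the exact interval reported in the paper may differ if the author's model or priors differ from this sketch.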