Assisting Decision Making in Scholarly Peer Review: A Preference Learning Perspective

2 Sep 2021 · Nils Dycke, Edwin Simpson, Ilia Kuznetsov, Iryna Gurevych

Peer review is the primary means of quality control in academia; as an outcome of the peer review process, program and area chairs make acceptance decisions for each paper based on the review reports and scores it received. The quality of scientific work is multi-faceted; coupled with the subjectivity of reviewing, this makes final decision making difficult and time-consuming. To support this final step of peer review, we formalize it as a paper ranking problem. We introduce a novel, multi-faceted, generic evaluation framework for ranking submissions based on peer reviews that takes into account effectiveness, efficiency, and fairness. We propose a preference learning perspective on the task that considers both review texts and scores to alleviate the inevitable bias and noise in reviews. Our experiments on peer review data from the ACL 2018 conference demonstrate the superiority of our preference-learning-based approach over baselines and prior work, while highlighting the importance of using both review texts and scores to rank submissions.
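To make the preference learning framing concrete, here is a minimal illustrative sketch, not the authors' implementation: pairwise preferences between submissions are simulated as noisy judgments of an underlying paper quality, features combine a score-based and a (stand-in) text-based signal, and a Bradley-Terry-style model fit on feature differences yields a utility from which the ranking is read off. The synthetic data, feature names, and pairing heuristic are all assumptions made for this example.

```python
# Illustrative sketch only -- not the paper's method. It demonstrates the
# general recipe of preference learning for ranking submissions: derive
# pairwise preferences, fit a Bradley-Terry-style model, rank by utility.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "papers": each has a latent quality, three noisy review scores,
# and a toy text-derived feature (e.g., mean review sentiment) -- assumptions.
n_papers = 50
quality = rng.normal(size=n_papers)
scores = quality[:, None] + rng.normal(scale=1.0, size=(n_papers, 3))
sentiment = quality + rng.normal(scale=0.8, size=n_papers)

features = np.column_stack([scores.mean(axis=1), sentiment])

# Pairwise training preferences: noisy comparisons of latent quality,
# standing in for preferences extracted from review texts and scores.
pairs, labels = [], []
for i, j in combinations(range(n_papers), 2):
    pairs.append(features[i] - features[j])
    noisy_i = quality[i] + rng.normal(scale=0.5)
    noisy_j = quality[j] + rng.normal(scale=0.5)
    labels.append(int(noisy_i > noisy_j))

# Bradley-Terry-style model: logistic regression on feature differences,
# with no intercept so that swapping i and j flips the predicted preference.
model = LogisticRegression(fit_intercept=False)
model.fit(np.array(pairs), np.array(labels))

# Rank papers by the learned utility f(x) = w . x.
utility = features @ model.coef_.ravel()
ranking = np.argsort(-utility)
print("Top 5 papers by learned utility:", ranking[:5])
```

Fitting on feature differences without an intercept makes the model antisymmetric in the pair order, which is the defining property of Bradley-Terry-style preference models and is what allows a consistent global ranking to be recovered from noisy pairwise signals.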
