Search Results for author: Ivan Stelmakh

Found 10 papers, 3 papers with code

A Gold Standard Dataset for the Reviewer Assignment Problem

2 code implementations · 23 Mar 2023 · Ivan Stelmakh, John Wieting, Graham Neubig, Nihar B. Shah

We address this challenge by collecting a novel dataset of similarity scores that we release to the research community.

How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

no code implementations · 22 Nov 2022 · Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah

In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we survey the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception about their own papers after seeing the reviews.

ASQA: Factoid Questions Meet Long-Form Answers

no code implementations · 12 Apr 2022 · Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, Ming-Wei Chang

In contrast to existing long-form QA tasks (such as ELI5), ASQA admits a clear notion of correctness: a user faced with a good summary should be able to answer different interpretations of the original ambiguous question.

Question Answering

CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription

1 code implementation · 2 Jul 2021 · Nikita Pavlichenko, Ivan Stelmakh, Dmitry Ustalov

The main obstacle to designing aggregation methods for more advanced applications is the absence of training data, and in this work we focus on bridging this gap in speech recognition.

Crowdsourced Text Aggregation · Speech Recognition

Debiasing Evaluations That are Biased by Evaluations

1 code implementation · 1 Dec 2020 · Jingyan Wang, Ivan Stelmakh, Yuting Wei, Nihar B. Shah

For example, universities ask students to rate the teaching quality of their instructors, and conference organizers ask authors of submissions to evaluate the quality of the reviews.

A Large Scale Randomized Controlled Trial on Herding in Peer-Review Discussions

no code implementations · 30 Nov 2020 · Ivan Stelmakh, Charvi Rastogi, Nihar B. Shah, Aarti Singh, Hal Daumé III

Peer review is the backbone of academia and humans constitute a cornerstone of this process, being responsible for reviewing papers and making the final acceptance/rejection decisions.

Decision Making

A Novice-Reviewer Experiment to Address Scarcity of Qualified Reviewers in Large Conferences

no code implementations · 30 Nov 2020 · Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III

Conference peer review constitutes a human-computation process whose importance cannot be overstated: not only does it identify the best submissions for acceptance, but, ultimately, it impacts the future of the whole research area by promoting some ideas and restraining others.

Prior and Prejudice: The Novice Reviewers' Bias against Resubmissions in Conference Peer Review

no code implementations · 30 Nov 2020 · Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III

Modern machine learning and computer science conferences are experiencing a surge in the number of submissions that challenges the quality of peer review, as the number of competent reviewers is growing at a much slower rate.

BIG-bench Machine Learning

Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment

no code implementations · 8 Oct 2020 · Ivan Stelmakh, Nihar B. Shah, Aarti Singh

We consider the issue of strategic behaviour in various peer-assessment tasks, including peer grading of exams or homeworks and peer review in hiring or promotions.

PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review

no code implementations · 16 Jun 2018 · Ivan Stelmakh, Nihar B. Shah, Aarti Singh

Our fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers.

Fairness
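The contrast between the two objectives above can be illustrated with a minimal sketch. The similarity numbers below are made up for illustration and are not from the paper; the sketch assigns one reviewer per paper by brute force, which is a simplification of the actual assignment setting.

```python
from itertools import permutations

# Hypothetical similarity scores (illustrative numbers only):
# similarity[r][p] = expected review quality if reviewer r handles paper p.
similarity = [
    [1.0, 0.5],  # reviewer 0
    [0.5, 0.1],  # reviewer 1
]
n = len(similarity)

def best_assignment(objective):
    """Assign one reviewer per paper, maximizing the given objective
    over per-paper review qualities (brute force over permutations)."""
    return max(
        permutations(range(n)),
        key=lambda perm: objective(similarity[r][p] for p, r in enumerate(perm)),
    )

# Common objective: maximize total quality over all papers.
print(best_assignment(sum))  # (0, 1): total 1.1, but paper 1 gets only 0.1
# Fairness objective: maximize the quality of the most disadvantaged paper.
print(best_assignment(min))  # (1, 0): total 1.0, worst-off paper gets 0.5
```

The two objectives disagree here: maximizing the total sacrifices paper 1, while the max-min objective trades a small amount of total quality for a much better worst case.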
