1 code implementation • 8 Jan 2024 • Mike D'Arcy, Tom Hope, Larry Birnbaum, Doug Downey
We study the ability of LLMs to generate feedback for scientific papers and develop MARG, a feedback generation approach using multiple LLM instances that engage in internal discussion.
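To make the idea of multiple LLM instances in internal discussion concrete, here is a minimal sketch of that general pattern; it is not the MARG implementation, and `call_llm`, the prompts, and the worker/leader roles are placeholders for illustration only.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion API call; replace with an actual client."""
    return f"[model response to: {prompt[:40]}...]"

def multi_agent_feedback(paper_text: str, n_workers: int = 3, rounds: int = 2) -> str:
    # Each worker agent reads the paper and drafts its own concerns.
    notes = [call_llm(f"Worker {i}: read this paper and list concerns:\n{paper_text}")
             for i in range(n_workers)]
    # Workers see each other's notes and refine them over a few discussion rounds.
    for _ in range(rounds):
        shared = "\n".join(notes)
        notes = [call_llm(f"Worker {i}: refine your concerns given the discussion:\n{shared}")
                 for i in range(n_workers)]
    # A leader agent consolidates the discussion into reviewer-style feedback.
    return call_llm("Leader: write consolidated peer-review feedback:\n" + "\n".join(notes))

print(multi_agent_feedback("<paper text here>"))
```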
1 code implementation • 21 Jun 2023 • Mike D'Arcy, Alexis Ross, Erin Bransom, Bailey Kuehl, Jonathan Bragg, Tom Hope, Doug Downey
We introduce the task of automatically revising scientific papers based on peer feedback and release ARIES, a dataset of review comments and their corresponding paper edits.
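As a rough illustration of the kind of data such a task pairs together, the sketch below shows one possible representation of a review comment aligned with a paper edit; the field names are hypothetical and do not reflect the actual ARIES schema.

```python
from dataclasses import dataclass

@dataclass
class ReviewEditPair:
    paper_id: str          # identifier of the paper being revised
    review_comment: str    # the reviewer's request or criticism
    original_text: str     # span of the paper before revision
    revised_text: str      # corresponding span after the authors' edit

example = ReviewEditPair(
    paper_id="paper-001",
    review_comment="The claim in Section 3 needs a supporting citation.",
    original_text="Our method outperforms prior work.",
    revised_text="Our method outperforms prior work [12].",
)
```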
2 code implementations • 23 Nov 2022 • Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
In response, we introduce SciRepEval, the first comprehensive benchmark for training and evaluating scientific document representations.
1 code implementation • 11 Jul 2022 • Jon Saad-Falcon, Amanpreet Singh, Luca Soldaini, Mike D'Arcy, Arman Cohan, Doug Downey
Real-world applications of neural language models often involve running many different models over the same corpus.
no code implementations • 29 Sep 2021 • Mike D'Arcy, Doug Downey
Active Learning (AL) has the potential to reduce labeling cost when training natural language processing models, but its effectiveness with the large pretrained transformer language models that power today's NLP is uncertain.
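For readers unfamiliar with the setting, the sketch below shows generic pool-based active learning with least-confidence acquisition; it is only an illustration of the paradigm being studied, not the paper's experimental setup, and `predict_proba` is a stub standing in for a fine-tuned transformer classifier.

```python
import numpy as np

def predict_proba(model, texts):
    """Placeholder: in practice this would come from a fine-tuned transformer classifier."""
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(2), size=len(texts))

def select_batch(model, unlabeled_texts, batch_size=16):
    probs = predict_proba(model, unlabeled_texts)
    # Least-confidence acquisition: choose the examples whose top predicted
    # class probability is lowest, i.e. where the model is most uncertain.
    uncertainty = 1.0 - probs.max(axis=1)
    top = np.argsort(-uncertainty)[:batch_size]
    return [unlabeled_texts[i] for i in top]

pool = [f"unlabeled example {i}" for i in range(100)]
print(select_batch(model=None, unlabeled_texts=pool, batch_size=4))
```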
2 code implementations • 8 Apr 2019 • Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, Doug Downey
To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems.
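A simplified sketch of this model-in-the-loop collection idea follows: candidate questions are kept only when a baseline system answers them incorrectly. The baseline here is a stub, and the filtering loop is an illustration of the general procedure rather than the exact pipeline used for CODAH.

```python
def baseline_answer(question: str, choices: list[str]) -> int:
    """Stand-in for a neural multiple-choice QA model; returns a choice index."""
    return 0  # placeholder prediction

def keep_adversarial(candidates):
    kept = []
    for question, choices, gold_index in candidates:
        if baseline_answer(question, choices) != gold_index:
            kept.append((question, choices, gold_index))  # question fooled the baseline
    return kept

candidates = [("She packed an umbrella because...",
               ["it was sunny", "rain was forecast"], 1)]
print(keep_adversarial(candidates))
```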
Ranked #1 on Common Sense Reasoning on CODAH (using extra training data)
no code implementations • 9 Mar 2018 • Mahmoud Hamandi, Mike D'Arcy, Pooyan Fazli
We present a novel human-aware navigation approach in which the robot learns to navigate safely in crowds by mimicking human behavior.