no code implementations • NAACL (sdp) 2021 • Iz Beltagy, Arman Cohan, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Keith Hall, Drahomira Herrmannova, Petr Knoth, Kyle Lo, Philipp Mayr, Robert Patton, Michal Shmueli-Scheuer, Anita de Waard, Kuansan Wang, Lucy Wang
With the ever-increasing pace of research and high volume of scholarly communication, scholars face a daunting task.
no code implementations • EMNLP (sdp) 2020 • Sajad Sotudeh Gharebagh, Arman Cohan, Nazli Goharian
A two-stage model that additionally includes an abstraction step using BART; and 3.
no code implementations • ACL 2022 • Iz Beltagy, Arman Cohan, Robert Logan IV, Sewon Min, Sameer Singh
The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult.
no code implementations • sdp (COLING) 2022 • Arman Cohan, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Drahomira Herrmannova, Petr Knoth, Kyle Lo, Philipp Mayr, Michal Shmueli-Scheuer, Anita de Waard, Lucy Lu Wang
With the ever-increasing pace of research and high volume of scholarly communication, scholars face a daunting task.
1 code implementation • sdp (COLING) 2022 • Arman Cohan, Guy Feigenblat, Tirthankar Ghosal, Michal Shmueli-Scheuer
We present the main findings of the MuP 2022 shared task, the first shared task on multi-perspective scientific document summarization.
1 code implementation • 21 Jan 2025 • Yilun Zhao, Lujing Xie, Haowei Zhang, Guo Gan, Yitao Long, Zhiyuan Hu, Tongyan Hu, Weiyuan Chen, Chuhan Li, Junyang Song, Zhijian Xu, Chengye Wang, Weifeng Pan, Ziyao Shangguan, Xiangru Tang, Zhenwen Liang, Yixin Liu, Chen Zhao, Arman Cohan
We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark for evaluating foundation models in video understanding.
1 code implementation • 11 Jan 2025 • Xiangru Tang, Tianyu Hu, Muyang Ye, Yanjun Shao, Xunjian Yin, Siru Ouyang, Wangchunshu Zhou, Pan Lu, Zhuosheng Zhang, Yilun Zhao, Arman Cohan, Mark Gerstein
To address these challenges, we present ChemAgent, a novel framework designed to improve the performance of LLMs through a dynamic, self-updating library.
no code implementations • 31 Dec 2024 • Mingqi Gao, Yixin Liu, Xinyu Hu, Xiaojun Wan, Jonathan Bragg, Arman Cohan
Due to the high cost and time-consuming nature of human evaluations, an automatic LLM bencher (i.e., an automatic evaluation framework that aims to rank LLMs based on their alignment with human preferences) is indispensable.
1 code implementation • 30 Dec 2024 • Zhaojian Yu, Yilun Zhao, Arman Cohan, Xiao-Ping Zhang
First, we propose a general recipe for generating more challenging versions of existing benchmarks, resulting in three new benchmarks: HumanEval Pro, MBPP Pro, and BigCodeBench-Lite Pro, specifically designed to assess LLMs on self-invoking code generation.
1 code implementation • 23 Nov 2024 • Haochen Zhao, Xiangru Tang, Ziran Yang, Xiao Han, Xuanzhi Feng, Yueqing Fan, Senhao Cheng, Di Jin, Yilun Zhao, Arman Cohan, Mark Gerstein
To address this issue in the field of chemistry, we introduce ChemSafetyBench, a benchmark designed to evaluate the accuracy and safety of LLM responses.
1 code implementation • 8 Nov 2024 • Yilun Zhao, Yitao Long, Yuru Jiang, Chengye Wang, Weiyuan Chen, Hongjun Liu, Yiming Zhang, Xiangru Tang, Chen Zhao, Arman Cohan
We introduce FinDVer, a comprehensive benchmark specifically designed to evaluate the explainable claim verification capabilities of LLMs in the context of understanding and analyzing long, hybrid-content financial documents.
1 code implementation • 8 Nov 2024 • Shruti Singh, Nandan Sarkar, Arman Cohan
We introduce SciDQA, a new reading comprehension dataset of 2,937 QA pairs that challenges LLMs to demonstrate deep understanding of scientific articles.
1 code implementation • 7 Nov 2024 • Yicheng Gao, Gonghan Xu, Zhe Wang, Arman Cohan
Recent advances in large language models (LLMs) show the potential of using LLMs as evaluators for assessing the quality of text generations from LLMs.
no code implementations • 6 Nov 2024 • Chuhan Li, Ziyao Shangguan, Yilun Zhao, Deyuan Li, Yixin Liu, Arman Cohan
Existing benchmarks for evaluating foundation models mainly focus on single-document, text-only tasks.
1 code implementation • 30 Oct 2024 • Gabrielle Kaili-May Liu, Bowen Shi, Avi Caciularu, Idan Szpektor, Arman Cohan
Multi-document (MD) processing is crucial for LLMs to handle real-world tasks such as summarization and question-answering across large sets of documents.
1 code implementation • 30 Oct 2024 • Yixin Liu, Argyris Oikonomou, Weiqiang Zheng, Yang Cai, Arman Cohan
To achieve robust alignment with general preferences, we model the alignment problem as a two-player zero-sum game, where the Nash equilibrium policy guarantees a 50% win rate against any competing policy.
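A compact way to write the game referenced above, in generic notation rather than the paper's own: the aligned policy is the maximin solution of a symmetric zero-sum game whose payoff is the probability that one policy's response is preferred over the other's.

```latex
% Alignment as a symmetric two-player zero-sum game over the preference
% probability (generic notation; an illustrative sketch, not necessarily the
% paper's exact formulation).
\[
  \pi^{*} \;=\; \arg\max_{\pi}\,\min_{\pi'}\;
  \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot\mid x),\; y' \sim \pi'(\cdot\mid x)}
  \bigl[\, \Pr(y \succ y' \mid x) \,\bigr]
\]
% Because the two policies' win rates sum to one, the game is symmetric with
% value 1/2, so the Nash (maximin) policy \(\pi^{*}\) wins at least 50% of
% comparisons against any competing policy \(\pi'\).
```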
1 code implementation • 30 Oct 2024 • Ziyao Shangguan, Chuhan Li, Yuxuan Ding, Yanan Zheng, Yilun Zhao, Tesca Fitzgerald, Arman Cohan
Our study of existing benchmarks shows that this capability of MFMs is likely overestimated, since many questions can be solved from a single frame, a few frames, or out-of-order frames.
no code implementations • 11 Oct 2024 • Simeng Han, Aaron Yu, Rui Shen, Zhenting Qi, Martin Riddell, Wenfei Zhou, Yujie Qiao, Yilun Zhao, Semih Yavuz, Ye Liu, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Dragomir Radev, Rex Ying, Arman Cohan
We show that human-written reasoning chains significantly boost the logical reasoning capabilities of LLMs via many-shot prompting and fine-tuning.
1 code implementation • 9 Oct 2024 • Yixin Liu, Kejian Shi, Alexander R. Fabbri, Yilun Zhao, Peifeng Wang, Chien-Sheng Wu, Shafiq Joty, Arman Cohan
The automatic evaluation of instruction following typically involves using large language models (LLMs) to assess response quality.
no code implementations • 28 Sep 2024 • Xuyuan Xiong, Simeng Han, Ziyue Zhou, Arman Cohan
Large Language Models (LLMs) are commonly used to generate solutions for mathematical reasoning problems in the following formats: natural language, code, or a combination of both.
no code implementations • 4 Sep 2024 • Hyunji Lee, Luca Soldaini, Arman Cohan, Minjoon Seo, Kyle Lo
To our knowledge, RouterRetriever is the first work to demonstrate the advantages of using multiple domain-specific expert embedding models with effective routing over a single, general-purpose embedding model in retrieval tasks.
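A rough sketch of the routing idea described above, under the assumption that each domain expert is selected by comparing the query against one representative ("pilot") embedding per domain; the domain names, encoders, and routing rule below are illustrative stand-ins, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical domain experts: each "expert" here is just a random projection
# standing in for a domain-specific embedding model (illustrative only).
experts = {name: rng.normal(size=(DIM, DIM)) for name in ["biomed", "law", "code"]}

# One representative ("pilot") embedding per domain, e.g. a centroid of a few
# held-out queries (an assumption, not the paper's exact recipe).
pilots = {name: rng.normal(size=DIM) for name in experts}

def base_encode(text: str) -> np.ndarray:
    """Stand-in for a general-purpose query encoder (ignores `text` here)."""
    vec = rng.normal(size=DIM)
    return vec / np.linalg.norm(vec)

def route_and_encode(query: str) -> tuple[str, np.ndarray]:
    """Pick the expert whose pilot embedding is most similar to the query,
    then encode the query with that expert."""
    q = base_encode(query)
    best = max(pilots, key=lambda n: float(q @ pilots[n] / np.linalg.norm(pilots[n])))
    expert_vec = experts[best] @ q
    return best, expert_vec / np.linalg.norm(expert_vec)

domain, embedding = route_and_encode("What gene regulates insulin production?")
print(domain, embedding.shape)
```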
1 code implementation • 18 Jul 2024 • Yixin Liu, PengFei Liu, Arman Cohan
In this work, we explore an under-investigated aspect of DPO - its dependency on the reference model or policy.
1 code implementation • 20 Jun 2024 • Chunyuan Deng, Yilun Zhao, Yuzhao Heng, Yitong Li, Jiannan Cao, Xiangru Tang, Arman Cohan
This survey serves as a succinct overview of the most recent advancements in data contamination research, providing a straightforward guide for the benefit of future research endeavors.
1 code implementation • 20 Jun 2024 • Xiangru Tang, Xingyao Zhang, Yanjun Shao, Jie Wu, Yilun Zhao, Arman Cohan, Ming Gong, Dongmei Zhang, Mark Gerstein
To conduct the experiments, we construct a Personalized Scientific Writing (PSW) dataset to study multi-user personalization.
1 code implementation • 10 Jun 2024 • David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, Nitzan Barzilay, Kyle Lo, Tom Hope, Luca Soldaini, Shannon Zejiang Shen, Doug Downey, Hannaneh Hajishirzi, Arman Cohan
We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following demonstrations for 54 tasks covering five essential scientific literature understanding capabilities: information extraction, summarization, question answering, claim verification, and classification.
no code implementations • 23 Apr 2024 • Ansong Ni, Miltiadis Allamanis, Arman Cohan, Yinlin Deng, Kensen Shi, Charles Sutton, Pengcheng Yin
A fundamental skill among human developers is the ability to understand and reason about program execution.
1 code implementation • 4 Apr 2024 • Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, Rui Zhang
This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs.
no code implementations • 3 Apr 2024 • Chunyuan Deng, Xiangru Tang, Yilun Zhao, Hanming Wang, Haoran Wang, Wangchunshu Zhou, Arman Cohan, Mark Gerstein
Recently, large language models (LLMs) have evolved into interactive agents, proficient in planning, tool use, and task execution across a wide variety of tasks.
1 code implementation • 22 Mar 2024 • Orion Weller, Benjamin Chang, Sean MacAvaney, Kyle Lo, Arman Cohan, Benjamin Van Durme, Dawn Lawrie, Luca Soldaini
First, we introduce our dataset FollowIR, which contains a rigorous instruction evaluation benchmark as well as a training set for helping IR models learn to better follow real-world instructions.
1 code implementation • 9 Mar 2024 • Lorenzo Jaime Yu Flores, Arman Cohan
We study the behavior of the underlying losses on factual and non-factual examples to understand and refine the performance of LT. We demonstrate that LT's performance is limited when its underlying assumption (that noisy targets have a higher NLL loss) is not satisfied, and find that word-level NLL among entities provides a better signal for distinguishing factuality.
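The entity-level signal mentioned above can be illustrated with a small, hypothetical helper: given per-token NLL values from a summarizer and a mask over entity tokens, average the NLL over entity tokens only. This is a sketch of the idea, not the paper's code.

```python
import numpy as np

def entity_nll(token_nll: np.ndarray, entity_mask: np.ndarray) -> float:
    """Mean NLL restricted to entity tokens.

    token_nll:   per-token negative log-likelihood of the target summary
                 under the model, shape (seq_len,).
    entity_mask: 1 where the token belongs to a named entity, else 0.
    A higher value suggests the model found the (possibly hallucinated)
    entities surprising, which can serve as a factuality signal.
    """
    mask = entity_mask.astype(bool)
    if not mask.any():
        return float("nan")  # no entities to score
    return float(token_nll[mask].mean())

# Toy example: the third and fourth tokens form an entity span.
nll = np.array([0.4, 0.3, 2.1, 1.8, 0.5])
mask = np.array([0, 0, 1, 1, 0])
print(entity_nll(nll, mask))  # ~1.95
```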
1 code implementation • 6 Mar 2024 • Martin Riddell, Ansong Ni, Arman Cohan
While large language models have achieved remarkable performance on various code generation benchmarks, there have been growing concerns regarding potential contamination of these benchmarks as they may be leaked into pretraining and finetuning data.
1 code implementation • 9 Feb 2024 • Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, Bhuwan Dhingra
Addressing this gap, we introduce a unified calibration framework, in which both the correctness of the LLMs' responses and their associated confidence levels are treated as distributions across a range of scores.
no code implementations • 6 Feb 2024 • Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao, Jian Tang, Zhuosheng Zhang, Arman Cohan, Zhiyong Lu, Mark Gerstein
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
3 code implementations • 1 Feb 2024 • Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi
Given the importance of these details in scientifically studying these models, including their biases and potential risks, we believe it is essential for the research community to have access to powerful, truly open LMs.
no code implementations • 26 Dec 2023 • Jacob Dunefsky, Arman Cohan
A key goal of current mechanistic interpretability research in NLP is to find linear features (also called "feature vectors") for transformers: directions in activation space corresponding to concepts that are used by a given model in its computation.
1 code implementation • 16 Nov 2023 • Xiangru Tang, Anni Zou, Zhuosheng Zhang, Ziming Li, Yilun Zhao, Xingyao Zhang, Arman Cohan, Mark Gerstein
Large language models (LLMs), despite their remarkable progress across various general domains, encounter significant barriers in medicine and healthcare.
1 code implementation • 16 Nov 2023 • Yilun Zhao, Yitao Long, Hongjun Liu, Ryo Kamoi, Linyong Nan, Lyuhao Chen, Yixin Liu, Xiangru Tang, Rui Zhang, Arman Cohan
Recent LLMs have demonstrated remarkable performance in solving exam-like math word problems.
1 code implementation • 16 Nov 2023 • Xiangru Tang, Yuliang Liu, Zefan Cai, Yanjun Shao, Junjie Lu, Yichi Zhang, Zexuan Deng, Helan Hu, Kaikai An, Ruijun Huang, Shuzheng Si, Sheng Chen, Haozhe Zhao, Liang Chen, Yan Wang, Tianyu Liu, Zhiwei Jiang, Baobao Chang, Yin Fang, Yujia Qin, Wangchunshu Zhou, Yilun Zhao, Arman Cohan, Mark Gerstein
Despite Large Language Models (LLMs) like GPT-4 achieving impressive results in function-level code generation, they struggle with repository-scale code understanding (e.g., coming up with the right arguments for calling routines), requiring a deeper comprehension of complex file interactions.
no code implementations • 16 Nov 2023 • Linyong Nan, Ellen Zhang, Weijin Zou, Yilun Zhao, Wenfei Zhou, Arman Cohan
A key discovery is the identification of two primary bottlenecks hindering effective interaction: the capacity for planning and the ability to generate multiple SQL queries.
1 code implementation • 16 Nov 2023 • Yilun Zhao, Hongjun Liu, Yitao Long, Rui Zhang, Chen Zhao, Arman Cohan
Finally, we evaluate a wide spectrum of 44 LLMs with both Chain-of-Thought and Program-of-Thought prompting methods.
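For readers unfamiliar with the two prompting styles named above: Chain-of-Thought asks the model to reason in natural language before answering, while Program-of-Thought asks it to emit executable code whose result is taken as the answer. The templates and completion below are generic illustrations, not the benchmark's actual prompts.

```python
QUESTION = "A fund grows 5% per year. What is the value of $1,000 after 2 years?"

# Chain-of-Thought: free-form reasoning followed by a final answer.
cot_prompt = (
    f"Question: {QUESTION}\n"
    "Let's think step by step, then give the final answer."
)

# Program-of-Thought: the model writes code; we execute it to get the answer.
pot_prompt = (
    f"Question: {QUESTION}\n"
    "Write Python code that computes the answer and stores it in `answer`."
)

# What a PoT completion might look like (hand-written here for illustration):
pot_completion = "answer = 1000 * 1.05 ** 2"
namespace: dict = {}
exec(pot_completion, namespace)            # in practice, run inside a sandbox
print(round(namespace["answer"], 2))       # 1102.5
```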
1 code implementation • 16 Nov 2023 • Hyunji Lee, Luca Soldaini, Arman Cohan, Minjoon Seo, Kyle Lo
Prevailing research practice today often relies on training dense retrievers on existing large datasets such as MSMARCO and then experimenting with ways to improve zero-shot generalization capabilities to unseen domains.
no code implementations • 16 Nov 2023 • Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, Arman Cohan
Recent observations have underscored a disparity between the inflated benchmark scores and the actual performance of LLMs, raising concerns about potential contamination of evaluation benchmarks.
1 code implementation • 15 Nov 2023 • Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, PengFei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan
Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) no LLM-based evaluation methods can achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation capabilities.
1 code implementation • 17 Oct 2023 • Lorenzo Jaime Yu Flores, Heyuan Huang, Kejian Shi, Sophie Chheang, Arman Cohan
Text simplification has emerged as an increasingly useful application of AI for bridging the communication gap in specialized fields such as medicine, where the lexicon is often dominated by technical jargon and complex constructs.
no code implementations • 29 Sep 2023 • Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Riddell, Troy Feng, Rui Shen, Stephen Yin, Ye Liu, Semih Yavuz, Caiming Xiong, Shafiq Joty, Yingbo Zhou, Dragomir Radev, Arman Cohan
Recently, large language models (LLMs), especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs in a few-shot or even zero-shot manner.
1 code implementation • 16 Sep 2023 • Yijie Zhou, Kejian Shi, Wencai Zhang, Yixin Liu, Yilun Zhao, Arman Cohan
Open-domain Multi-Document Summarization (ODMDS) is a critical tool for condensing vast arrays of documents into coherent, concise summaries.
1 code implementation • 16 Sep 2023 • Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, Mark Gerstein
Despite the remarkable capabilities of Large Language Models (LLMs) like GPT-4, producing complex, structured tabular data remains challenging.
1 code implementation • 15 Sep 2023 • Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie, Benjamin Van Durme, Arman Cohan, Luca Soldaini
Using large language models (LMs) for query or document expansion can improve generalization in information retrieval.
1 code implementation • 24 May 2023 • Avi Caciularu, Matthew E. Peters, Jacob Goldberger, Ido Dagan, Arman Cohan
The integration of multi-document pre-training objectives into language models has resulted in remarkable improvements in multi-document downstream tasks.
2 code implementations • 24 May 2023 • Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, Arman Cohan
These include the LogicNLG and our newly-constructed LoTNLG datasets for data insight generation, along with the FeTaQA and our newly-constructed F2WTQ datasets for query-based generation.
no code implementations • 24 May 2023 • Benjamin Newman, Luca Soldaini, Raymond Fok, Arman Cohan, Kyle Lo
Many real-world applications (e.g., note taking, search) require extracting a sentence or paragraph from a document and showing that snippet to a human outside of the source document.
1 code implementation • 23 May 2023 • Yixin Liu, Kejian Shi, Katherine S He, Longtian Ye, Alexander R. Fabbri, PengFei Liu, Dragomir Radev, Arman Cohan
Recent studies have found that summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets.
2 code implementations • 23 May 2023 • Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, Arman Cohan
Motivated by this, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary.
no code implementations • 21 May 2023 • Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions.
1 code implementation • 19 May 2023 • Revanth Gangi Reddy, Pradeep Dasigi, Md Arafat Sultan, Arman Cohan, Avirup Sil, Heng Ji, Hannaneh Hajishirzi
Retrieve-and-rerank is a prevalent framework in neural information retrieval, wherein a bi-encoder network initially retrieves a pre-defined number of candidates (e.g., K=100), which are then reranked by a more powerful cross-encoder model.
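A minimal sketch of the retrieve-and-rerank pipeline described above, with random stand-ins for the bi-encoder and cross-encoder (no particular model is assumed): the bi-encoder scores every document with a cheap dot product to select the top K, and the cross-encoder re-scores only those K candidates.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N_DOCS, K = 16, 1000, 100

doc_embeddings = rng.normal(size=(N_DOCS, DIM))  # precomputed by the bi-encoder

def bi_encode_query(query: str) -> np.ndarray:
    """Stand-in for the bi-encoder's query tower."""
    return rng.normal(size=DIM)

def cross_encoder_score(query: str, doc_id: int) -> float:
    """Stand-in for a slower, more accurate cross-encoder."""
    return float(rng.normal())

def retrieve_and_rerank(query: str) -> list[int]:
    # Stage 1: cheap dense retrieval of K candidates.
    q = bi_encode_query(query)
    scores = doc_embeddings @ q
    candidates = np.argsort(-scores)[:K]
    # Stage 2: expensive reranking of only the K candidates.
    reranked = sorted(candidates,
                      key=lambda d: cross_encoder_score(query, int(d)),
                      reverse=True)
    return [int(d) for d in reranked]

print(retrieve_and_rerank("neural information retrieval")[:5])
```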
2 code implementations • 15 May 2023 • Rabeeh Karimi Mahabadi, Hamish Ivison, Jaesung Tae, James Henderson, Iz Beltagy, Matthew E. Peters, Arman Cohan
Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various continuous domains.
1 code implementation • 30 Jan 2023 • Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo
Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores?
1 code implementation • 24 Jan 2023 • Rodney Kinney, Chloe Anastasiades, Russell Authur, Iz Beltagy, Jonathan Bragg, Alexandra Buraczynski, Isabel Cachola, Stefan Candra, Yoganand Chandrasekhar, Arman Cohan, Miles Crawford, Doug Downey, Jason Dunkelberger, Oren Etzioni, Rob Evans, Sergey Feldman, Joseph Gorney, David Graham, Fangzhou Hu, Regan Huff, Daniel King, Sebastian Kohlmeier, Bailey Kuehl, Michael Langan, Daniel Lin, Haokun Liu, Kyle Lo, Jaron Lochner, Kelsey MacMillan, Tyler Murray, Chris Newell, Smita Rao, Shaurya Rohatgi, Paul Sayre, Zejiang Shen, Amanpreet Singh, Luca Soldaini, Shivashankar Subramanian, Amber Tanaka, Alex D. Wade, Linda Wagner, Lucy Lu Wang, Chris Wilhelm, Caroline Wu, Jiangjiang Yang, Angele Zamarron, Madeleine van Zuylen, Daniel S. Weld
The volume of scientific output is creating an urgent need for automated tools to help scientists keep up with developments in their field.
no code implementations • 20 Dec 2022 • John Giorgi, Luca Soldaini, Bo wang, Gary Bader, Kyle Lo, Lucy Lu Wang, Arman Cohan
Via extensive automatic and human evaluation, we determine: (1) state-of-the-art summarizers suffer large reductions in performance when applied to open-domain MDS, (2) additional training in the open-domain setting can reduce this sensitivity to imperfect retrieval, and (3) summarizers are insensitive to the retrieval of duplicate documents and the order of retrieved documents, but highly sensitive to other errors, like the retrieval of irrelevant documents.
Ranked #1 on Multi-Document Summarization on MS^2
2 code implementations • 23 Nov 2022 • Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
In response, we introduce SciRepEval, the first comprehensive benchmark for training and evaluating scientific document representations.
1 code implementation • 25 Oct 2022 • David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Iz Beltagy, Lucy Lu Wang, Hannaneh Hajishirzi
While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic setting against large corpora of scientific literature.
1 code implementation • 2 Sep 2022 • Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, Dragomir Radev
We present FOLIO, a human-annotated, logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations.
1 code implementation • 11 Jul 2022 • Jon Saad-Falcon, Amanpreet Singh, Luca Soldaini, Mike D'Arcy, Arman Cohan, Doug Downey
Real-world applications of neural language models often involve running many different models over the same corpus.
1 code implementation • ACL 2022 • Thong Nguyen, Andrew Yates, Ayah Zirikly, Bart Desmet, Arman Cohan
In dataset-transfer experiments on three social media datasets, we find that grounding the model in PHQ9's symptoms substantially improves its ability to generalize to out-of-distribution data compared to a standard BERT-based approach.
1 code implementation • ACL 2022 • Dustin Wright, David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Isabelle Augenstein, Lucy Lu Wang
To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims.
2 code implementations • NAACL 2022 • Avi Caciularu, Ido Dagan, Jacob Goldberger, Arman Cohan
Long-context question answering (QA) tasks require reasoning over a long document or multiple documents.
3 code implementations • Findings (NAACL) 2022 • David Wadden, Kyle Lo, Lucy Lu Wang, Arman Cohan, Iz Beltagy, Hannaneh Hajishirzi
Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero/few-shot domain adaptation experiments.
1 code implementation • NAACL 2022 • Sheshera Mysore, Arman Cohan, Tom Hope
We present a new scientific document similarity model based on matching fine-grained aspects of texts.
3 code implementations • ACL 2022 • Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan
We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of fine-tuning labeled data.
Ranked #1 on Multi-Document Summarization on Multi-News
2 code implementations • NeurIPS 2021 • Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design.
1 code implementation • NAACL 2022 • Anne Lauscher, Brandon Ko, Bailey Kuehl, Sophie Johnson, David Jurgens, Arman Cohan, Kyle Lo
In our work, we address this research gap by proposing a novel framework for CCA as a document-level context extraction and labeling task.
1 code implementation • NAACL 2021 • Iz Beltagy, Arman Cohan, Hannaneh Hajishirzi, Sewon Min, Matthew E. Peters
In this tutorial, we aim at bringing interested NLP researchers up to speed about the recent and ongoing techniques for document-level representation learning.
1 code implementation • NAACL 2021 • Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner
Readers of academic research papers often read with the goal of answering specific questions.
Ranked #1 on Evidence Selection on QASPER
1 code implementation • 3 Mar 2021 • Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, Nazli Goharian
Managing the data for Information Retrieval (IR) experiments can be challenging.
2 code implementations • Findings (EMNLP) 2021 • Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E. Peters, Arie Cattan, Ido Dagan
We introduce a new pretraining approach geared for multi-document language modeling, incorporating two key ideas into the masked language modeling self-supervised objective.
Ranked #1 on Citation Recommendation on AAN test
1 code implementation • 28 Dec 2020 • Sajad Sotudeh, Arman Cohan, Nazli Goharian
We then present our results on three long summarization datasets, arXiv-Long, PubMed-Long, and Longsumm.
Ranked #1 on Extended Summarization on arXiv-Long Test
1 code implementation • 11 Dec 2020 • Daniel Khashabi, Arman Cohan, Siamak Shakeri, Pedram Hosseini, Pouya Pezeshkpour, Malihe Alikhani, Moin Aminnaseri, Marzieh Bitaab, Faeze Brahman, Sarik Ghazarian, Mozhdeh Gheini, Arman Kabiri, Rabeeh Karimi Mahabadi, Omid Memarrast, Ahmadreza Mosallanezhad, Erfan Noury, Shahab Raji, Mohammad Sadegh Rasooli, Sepideh Sadeghi, Erfan Sadeqi Azer, Niloofar Safi Samghabadi, Mahsa Shafaei, Saber Sheybani, Ali Tazarv, Yadollah Yaghoobzadeh
Despite the progress made in recent years in addressing natural language understanding (NLU) challenges, the majority of this progress remains to be concentrated on resource-rich languages like English.
2 code implementations • 2 Nov 2020 • Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan
Pretrained contextualized language models such as BERT and T5 have established a new state-of-the-art for ad-hoc search.
no code implementations • EMNLP 2020 • Sean MacAvaney, Arman Cohan, Nazli Goharian
With worldwide concerns surrounding the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), there is a rapidly growing body of scientific literature on the virus.
1 code implementation • 5 May 2020 • Sean MacAvaney, Arman Cohan, Nazli Goharian
In this work, we present a search system called SLEDGE, which utilizes SciBERT to effectively re-rank articles.
4 code implementations • Findings of the Association for Computational Linguistics 2020 • Isabel Cachola, Kyle Lo, Arman Cohan, Daniel S. Weld
We introduce TLDR generation, a new form of extreme summarization, for scientific papers.
2 code implementations • EMNLP 2020 • David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim, and to identify rationales justifying each decision.
5 code implementations • ACL 2020 • Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld
We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
Ranked #1 on Document Classification on SciDocs (MAG)
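SPECTER's citation-graph signal can be sketched as a triplet objective: a paper's embedding should be closer to a paper it cites than to a paper it does not cite. The toy vectors, L2 distance, and fixed margin below are schematic simplifications of the training setup.

```python
import numpy as np

def triplet_margin_loss(query: np.ndarray, cited: np.ndarray,
                        uncited: np.ndarray, margin: float = 1.0) -> float:
    """Encourage d(query, cited) + margin <= d(query, uncited)."""
    d_pos = np.linalg.norm(query - cited)
    d_neg = np.linalg.norm(query - uncited)
    return float(max(0.0, d_pos - d_neg + margin))

# Toy paper embeddings (in practice these come from a Transformer over
# title + abstract, trained end-to-end with this loss).
query   = np.array([1.0, 0.0])
cited   = np.array([0.9, 0.1])   # positive: a paper the query cites
uncited = np.array([-1.0, 0.5])  # negative: a paper it does not cite
print(triplet_margin_loss(query, cited, uncited))  # 0.0, triplet already satisfied
```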
22 code implementations • 10 Apr 2020 • Iz Beltagy, Matthew E. Peters, Arman Cohan
To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
Ranked #2 on Question Answering on WikiHop
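To make the linear-scaling claim concrete: instead of every token attending to every other token (quadratic in sequence length), each token attends only to a fixed window of w neighbors, so cost grows linearly with length. The snippet below is a naive reference implementation of that sliding-window pattern; it is not the paper's optimized kernels and omits Longformer's additional global-attention tokens.

```python
import numpy as np

def sliding_window_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                             window: int = 2) -> np.ndarray:
    """Each position i attends only to positions within `window` of i.

    q, k, v: arrays of shape (seq_len, dim). Cost is O(seq_len * window),
    versus O(seq_len**2) for full self-attention.
    """
    seq_len, dim = q.shape
    out = np.zeros_like(v)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

x = np.random.default_rng(2).normal(size=(6, 4))
print(sliding_window_attention(x, x, x, window=1).shape)  # (6, 4)
```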
no code implementations • 18 Jan 2020 • Sean MacAvaney, Arman Cohan, Nazli Goharian, Ross Filice
This allows medical practitioners to easily identify and learn from the reports in which their interpretation most substantially differed from that of the attending physician (who finalized the report).
1 code implementation • ACL 2020 • Lucy Lu Wang, Oyvind Tafjord, Arman Cohan, Sarthak Jain, Sam Skjonsberg, Carissa Schoenick, Nick Botner, Waleed Ammar
We fine-tune the contextualized word representations of the RoBERTa language model using labeled DDI data, and apply the fine-tuned model to identify supplement interactions.
1 code implementation • IJCNLP 2019 • Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Daniel S. Weld
As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document.
no code implementations • 14 May 2019 • Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, Ross W. Filice
Automatically generating accurate summaries from clinical reports could save a clinician's time, improve summary coverage, and reduce errors.
7 code implementations • 15 Apr 2019 • Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian
We call this joint approach CEDR (Contextualized Embeddings for Document Ranking).
Ranked #3 on Ad-Hoc Information Retrieval on TREC Robust04
1 code implementation • NAACL 2019 • Arman Cohan, Waleed Ammar, Madeleine van Zuylen, Field Cady
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
Ranked #2 on Sentence Classification on SciCite
5 code implementations • IJCNLP 2019 • Iz Beltagy, Kyle Lo, Arman Cohan
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Ranked #1 on Sentence Classification on Paper Field (using extra training data)
no code implementations • WS 2018 • Sean MacAvaney, Bart Desmet, Arman Cohan, Luca Soldaini, Andrew Yates, Ayah Zirikly, Nazli Goharian
Self-reported diagnosis statements have been widely employed in studying language related to mental health in social media.
1 code implementation • COLING 2018 • Arman Cohan, Bart Desmet, Andrew Yates, Luca Soldaini, Sean MacAvaney, Nazli Goharian
Mental health is a significant and growing public health concern.
no code implementations • WS 2018 • Luca Soldaini, Timothy Walsh, Arman Cohan, Julien Han, Nazli Goharian
In recent years, online communities have formed around suicide and self-harm prevention.
2 code implementations • NAACL 2018 • Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian
Neural abstractive summarization models have led to promising results in summarizing relatively short documents.
Ranked #4 on Unsupervised Extractive Summarization on Pubmed
1 code implementation • SEMEVAL 2018 • Sean MacAvaney, Luca Soldaini, Arman Cohan, Nazli Goharian
SemEval 2018 Task 7 focuses on relation extraction and classification in scientific literature.
no code implementations • EMNLP 2017 • Andrew Yates, Arman Cohan, Nazli Goharian
We propose methods for identifying posts in support communities that may indicate a risk of self-harm, and demonstrate that our approach outperforms strong previously proposed methods for identifying such posts.
no code implementations • 15 Aug 2017 • Arman Cohan, Allan Fong, Raj Ratwani, Nazli Goharian
Preventable medical errors are estimated to be among the leading causes of injury and death in the United States.
no code implementations • SEMEVAL 2017 • Sean MacAvaney, Arman Cohan, Nazli Goharian
Clinical TempEval 2017 (SemEval 2017 Task 12) addresses the task of cross-domain temporal extraction from clinical text.
no code implementations • 12 Jun 2017 • Arman Cohan, Nazli Goharian
We present a framework for scientific summarization which takes advantage of the citations and the scientific discourse structure.
no code implementations • 23 May 2017 • Arman Cohan, Nazli Goharian
Citation texts are sometimes not very informative or in some cases inaccurate by themselves; they need the appropriate context from the referenced paper to reflect its exact contributions.
1 code implementation • EMNLP 2015 • Arman Cohan, Nazli Goharian
We propose a summarization approach for scientific articles which takes advantage of citation-context and the document discourse model.
no code implementations • 23 Feb 2017 • Arman Cohan, Allan Fong, Nazli Goharian, Raj Ratwani
Medical errors are leading causes of death in the US and as such, prevention of these errors is paramount to promoting health care.
no code implementations • 22 Feb 2017 • Arman Cohan, Sydney Young, Andrew Yates, Nazli Goharian
Our analysis on the interaction of the moderators with the users further indicates that without an automatic way to identify critical content, it is indeed challenging for the moderators to provide timely response to the users in need.
1 code implementation • LREC 2016 • Arman Cohan, Nazli Goharian
Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system generated summary and the corresponding human written summaries.