Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design.
Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes.
In this tutorial, we aim to bring interested NLP researchers up to speed on recent and ongoing techniques for document-level representation learning.
Readers of academic research papers often read with the goal of answering specific questions.
Determining coreference of concept mentions across multiple documents is a fundamental task in natural language understanding.
In support of this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a dataset of over 470k documents and 20k summaries derived from the scientific literature.
We introduce a new pretraining approach geared for multi-document language modeling, incorporating two key ideas into the masked language modeling self-supervised objective.
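The abstract does not spell out the objective's details, but the general setup can be illustrated: related documents are concatenated with separator tokens, and tokens are masked so that recovering them can draw on context from the other documents in the set. A minimal sketch (the function name, separator token, and masking scheme here are illustrative assumptions, not the paper's exact method):

```python
import random

def mask_concatenated_docs(docs, mask_token="[MASK]", doc_sep="[DOC_SEP]",
                           rate=0.15, seed=0):
    # Hypothetical sketch: join related documents with a separator token,
    # then randomly mask words. Predicting a masked word can then use
    # context from the other documents -- the cross-document LM setup.
    rng = random.Random(seed)
    tokens = []
    for i, doc in enumerate(docs):
        if i > 0:
            tokens.append(doc_sep)
        tokens.extend(doc.split())
    labels = [None] * len(tokens)  # labels only at masked positions
    for i, tok in enumerate(tokens):
        if tok != doc_sep and rng.random() < rate:
            labels[i] = tok
            tokens[i] = mask_token
    return tokens, labels
```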
Creating a large-scale information extraction (IE) dataset at the document level is challenging because annotating entities and their document-level relationships, which often span multiple sentences or even sections, requires understanding the whole document.
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP.
We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
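One common way to turn citation-graph relatedness into a training signal is a triplet-style objective: an embedding of a paper is pulled toward a paper it is linked to in the citation graph and pushed away from an unrelated one. A minimal numpy sketch of such a loss (the specific margin formulation here is an illustrative assumption, not necessarily SPECTER's exact objective):

```python
import numpy as np

def triplet_margin_loss(query, positive, negative, margin=1.0):
    # Hypothetical sketch: `positive` is the embedding of a paper related to
    # `query` via the citation graph, `negative` an unrelated paper.
    # The loss is zero once the related paper is closer by at least `margin`.
    d_pos = np.linalg.norm(query - positive)
    d_neg = np.linalg.norm(query - negative)
    return max(0.0, d_pos - d_neg + margin)
```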
To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
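The linear scaling comes from restricting each token's attention to a local window rather than all positions. A minimal sketch of such a sliding-window attention mask (a simplified illustration; Longformer also adds dilated and global attention patterns not shown here):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    # Boolean mask: position i may attend only to positions within
    # +/- window // 2 of itself, so attended pairs grow linearly in
    # seq_len instead of quadratically as in full self-attention.
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    half = window // 2
    for i in range(seq_len):
        lo = max(0, i - half)
        hi = min(seq_len, i + half + 1)
        mask[i, lo:hi] = True
    return mask
```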
As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document.
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift.
Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Peters, Joanna Power, Sam Skjonsberg, Lucy Lu Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, Oren Etzioni
We describe a deployed scalable system for organizing published scientific literature into a heterogeneous graph to facilitate algorithmic manipulation and discovery.