Search Results for author: Iz Beltagy

Found 16 papers, 14 papers with code

FLEX: Unifying Evaluation for Few-Shot NLP

2 code implementations • 15 Jul 2021 • Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy

Few-shot NLP research is highly active, yet conducted in disjoint research threads with evaluation suites that lack challenging-yet-realistic testing setups and fail to employ careful experimental design.

Experimental Design, Few-Shot Learning, +1

Beyond Paragraphs: NLP for Long Sequences

no code implementations • NAACL 2021 • Iz Beltagy, Arman Cohan, Hannaneh Hajishirzi, Sewon Min, Matthew E. Peters

In this tutorial, we aim to bring interested NLP researchers up to speed on recent and ongoing techniques for document-level representation learning.

Document-level Representation Learning

MS2: Multi-Document Summarization of Medical Studies

1 code implementation • 13 Apr 2021 • Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, Lucy Lu Wang

In support of this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a dataset of over 470k documents and 20k summaries derived from the scientific literature.

Document Summarization, Multi-Document Summarization

CDLM: Cross-Document Language Modeling

2 code implementations • 2 Jan 2021 • Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E. Peters, Arie Cattan, Ido Dagan

We introduce a new pretraining approach geared for multi-document language modeling, incorporating two key ideas into the masked language modeling self-supervised objective.

Citation Recommendation, Coreference Resolution, +5

SciREX: A Challenge Dataset for Document-Level Information Extraction

1 code implementation • ACL 2020 • Sarthak Jain, Madeleine van Zuylen, Hannaneh Hajishirzi, Iz Beltagy

It is challenging to create a large-scale information extraction (IE) dataset at the document level, since annotating entities and their document-level relationships, which often span multiple sentences or even sections, requires an understanding of the whole document.


SPECTER: Document-level Representation Learning using Citation-informed Transformers

3 code implementations • ACL 2020 • Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld

We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.

Citation Prediction, Document Classification, +4
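SPECTER's training signal is document-level relatedness from the citation graph. A common way to turn that signal into a loss is a triplet margin objective; the sketch below shows a loss of that shape on toy vectors (assumption: plain NumPy vectors stand in for Transformer embeddings, and all names and values are illustrative, not SPECTER's actual implementation).

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Triplet objective of the kind SPECTER trains with: the anchor
    paper's embedding should be closer to a cited paper (positive)
    than to an uncited paper (negative) by at least `margin`.
    Sketch only -- embeddings here are plain vectors."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the cited paper is nearby, the uncited one far away,
# so the margin is already satisfied and the loss is zero.
anchor = np.array([1.0, 0.0])
cited = np.array([1.1, 0.1])
uncited = np.array([-2.0, 3.0])
loss = triplet_margin_loss(anchor, cited, uncited)
```

During training, gradients of this loss flow back into the encoder, pulling citing/cited papers together in embedding space.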

Longformer: The Long-Document Transformer

7 code implementations • 10 Apr 2020 • Iz Beltagy, Matthew E. Peters, Arman Cohan

To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.

Language Modelling, Question Answering
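The linear scaling comes from restricting each token's attention to a fixed-size local window, so cost grows as O(n·w) rather than O(n²). The sketch below illustrates that sliding-window pattern on toy data (assumption: a minimal NumPy version for intuition only; Longformer's real implementation also adds global attention and optimized CUDA kernels).

```python
import numpy as np

def sliding_window_attention(q, k, v, window=2):
    """Local self-attention sketch: token i attends only to tokens
    within `window` positions on each side, so total work is
    O(n * window) instead of O(n^2). Illustrative, not the
    Longformer implementation."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)  # scaled dot-product
        weights = np.exp(scores - scores.max())  # stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = sliding_window_attention(x, x, x, window=2)
```

Because each row only touches at most 2·window+1 keys, doubling the sequence length roughly doubles the cost, which is what makes documents of thousands of tokens tractable.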

Pretrained Language Models for Sequential Sentence Classification

1 code implementation • IJCNLP 2019 • Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Daniel S. Weld

As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document.

Classification, Document-level, +2

ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing

1 code implementation • WS 2019 • Mark Neumann, Daniel King, Iz Beltagy, Waleed Ammar

Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift.

Combining Distant and Direct Supervision for Neural Relation Extraction

1 code implementation • NAACL 2019 • Iz Beltagy, Kyle Lo, Waleed Ammar

In relation extraction with distant supervision, noisy labels make it difficult to train quality models.

Relation Extraction
