IJCNLP 2019

OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction

IJCNLP 2019 thunlp/OpenNRE

OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE).
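A minimal usage sketch, following the pattern in the repository's README; the pretrained model name and the inference call shape are assumptions about the released package, not guaranteed here:

```python
# Minimal OpenNRE sketch (model name and API assumed from the repo's README).
import opennre

# Load a pretrained sentence-level RE model (CNN encoder + softmax classifier
# trained on the Wiki80 dataset).
model = opennre.get_model('wiki80_cnn_softmax')

# Infer the relation between a head and a tail entity, each given by a
# character span into the text.
result = model.infer({
    'text': 'Bill Gates co-founded Microsoft with Paul Allen.',
    'h': {'pos': (0, 10)},    # head entity span: "Bill Gates"
    't': {'pos': (22, 31)},   # tail entity span: "Microsoft"
})
print(result)  # (relation_label, confidence)
```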

INFORMATION RETRIEVAL QUESTION ANSWERING RELATION EXTRACTION

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

IJCNLP 2019 UKPLab/sentence-transformers

However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT.
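The ~50 million figure is the number of unordered sentence pairs: 10,000 × 9,999 / 2 = 49,995,000. SBERT avoids this by encoding each sentence once and comparing fixed-size embeddings. A sketch using the sentence-transformers package; the checkpoint name is one of its published models and is an assumption here:

```python
# Encode each sentence once, then compare embeddings with cosine similarity
# instead of running a cross-encoder on all ~50 million pairs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bert-base-nli-mean-tokens')  # assumed checkpoint

sentences = [
    'A man is eating food.',
    'A man is eating a piece of bread.',
    'The girl is carrying a baby.',
]
embeddings = model.encode(sentences, convert_to_tensor=True)  # one pass over n sentences

# Cheap pairwise cosine similarities on the precomputed embeddings.
cosine_scores = util.pytorch_cos_sim(embeddings, embeddings)
print(cosine_scores)
```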

SEMANTIC SIMILARITY SEMANTIC TEXTUAL SIMILARITY SENTENCE EMBEDDINGS TRANSFER LEARNING

NeuronBlocks: Building Your NLP DNN Models Like Playing Lego

IJCNLP 2019 Microsoft/NeuronBlocks

Deep Neural Networks (DNN) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.

Show Your Work: Improved Reporting of Experimental Results

IJCNLP 2019 DerwenAI/pytextrank

Research in natural language processing proceeds, in part, by demonstrating that new models achieve superior performance (e.g., accuracy) on held-out test data, compared to previous results.
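An illustrative sketch of the reporting hazard the paper addresses: the best of many tuning runs can beat a genuinely better model tuned fewer times. All numbers below are synthetic, chosen only to make the point:

```python
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.810, scale=0.010, size=50)   # 50 tuning runs
new_model = rng.normal(loc=0.815, scale=0.010, size=3)   # 3 tuning runs

print(f"baseline  best of 50: {baseline.max():.3f}")
print(f"new model best of 3:  {new_model.max():.3f}")
# Reporting only each max conflates model quality with hyperparameter budget,
# which is why the paper argues for budget-aware reporting.
```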

UER: An Open-Source Toolkit for Pre-training Models

IJCNLP 2019 dbiir/UER-py

Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks.

Text Summarization with Pretrained Encoders

IJCNLP 2019 nlpyang/PreSumm

For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not).
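A minimal sketch of that two-optimizer schedule: a smaller learning rate with a longer warmup for the pretrained encoder, a larger one with a shorter warmup for the randomly initialized decoder. The stand-in modules and the exact values are assumptions for illustration, not PreSumm's code:

```python
import torch

def noam_lr(step, base_lr, warmup):
    # Linear warmup, then inverse-square-root decay.
    step = max(step, 1)
    return base_lr * min(step ** -0.5, step * warmup ** -1.5)

encoder = torch.nn.Linear(768, 768)   # stand-in for the pretrained encoder
decoder = torch.nn.Linear(768, 768)   # stand-in for the fresh decoder

opt_enc = torch.optim.Adam(encoder.parameters())
opt_dec = torch.optim.Adam(decoder.parameters())

for step in range(1, 10001):
    for g in opt_enc.param_groups:
        g['lr'] = noam_lr(step, base_lr=2e-3, warmup=20000)  # slow, long warmup
    for g in opt_dec.param_groups:
        g['lr'] = noam_lr(step, base_lr=0.1, warmup=10000)   # fast, short warmup
    # ... forward pass, loss.backward(), opt_enc.step(), opt_dec.step() ...
```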

SOTA for Document Summarization on CNN / Daily Mail (using extra training data)

ABSTRACTIVE TEXT SUMMARIZATION DOCUMENT SUMMARIZATION EXTRACTIVE DOCUMENT SUMMARIZATION

Language Models as Knowledge Bases?

IJCNLP 2019 facebookresearch/LAMA

Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks.
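The paper probes what pretrained language models already know by posing cloze statements (its "Dante was born in [MASK]" example). A sketch of the same probe using the HuggingFace fill-mask pipeline rather than LAMA's own evaluation harness; the checkpoint is an assumption:

```python
# Cloze-style probe in the spirit of LAMA, via the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline('fill-mask', model='bert-base-cased')  # assumed checkpoint
for pred in fill_mask('Dante was born in [MASK].'):
    print(pred['token_str'], round(pred['score'], 3))  # candidate token + probability
```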

LANGUAGE MODELLING OPEN-DOMAIN QUESTION ANSWERING

Learning to Copy for Automatic Post-Editing

IJCNLP 2019 THUNLP-MT/THUMT

To better identify translation errors, our method learns the representations of source sentences and system outputs in an interactive way.
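A pointer-style copy sketch in the spirit of "learning to copy": the decoder mixes a generation distribution over the vocabulary with a copy distribution over tokens of the machine-translation output. Shapes, layers, and the random inputs below are illustrative assumptions, not THUMT's implementation:

```python
import torch
import torch.nn.functional as F

vocab_size, hidden, mt_len = 100, 16, 5
decoder_state = torch.randn(1, hidden)
attn_weights = F.softmax(torch.randn(1, mt_len), dim=-1)   # attention over MT output
mt_token_ids = torch.randint(0, vocab_size, (1, mt_len))   # MT output token ids

gate = torch.nn.Linear(hidden, 1)
vocab_proj = torch.nn.Linear(hidden, vocab_size)

p_gen = torch.sigmoid(gate(decoder_state))                 # generate vs. copy gate
p_vocab = F.softmax(vocab_proj(decoder_state), dim=-1)     # generation distribution
p_copy = torch.zeros(1, vocab_size).scatter_add(1, mt_token_ids, attn_weights)
p_final = p_gen * p_vocab + (1 - p_gen) * p_copy           # mixed output distribution
print(p_final.sum())  # ≈ 1.0
```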

AUTOMATIC POST-EDITING MACHINE TRANSLATION