IJCNLP 2019

OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction

IJCNLP 2019 thunlp/OpenNRE

OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE).

Tasks: Information Retrieval, Question Answering, Relation Extraction
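
A minimal usage sketch for the OpenNRE toolkit listed above, following the inference pattern shown in the thunlp/OpenNRE README; the pretrained model name and the `get_model`/`infer` calls reflect that README and may differ across versions, and the example sentence and entity spans are illustrative assumptions.

```python
# Sentence-level relation extraction with OpenNRE (sketch based on the
# README usage; model name and API may vary across OpenNRE versions).
import opennre

# Load a pretrained model for the Wiki80 relation set
# (weights are downloaded on first use).
model = opennre.get_model('wiki80_cnn_softmax')

# Predict the relation between the head ('h') and tail ('t') entities,
# given their character spans (start, end) in the text.
relation, score = model.infer({
    'text': 'Bill Gates founded Microsoft in 1975.',
    'h': {'pos': (0, 10)},    # "Bill Gates"
    't': {'pos': (19, 28)},   # "Microsoft"
})
print(relation, score)
```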

NeuronBlocks: Building Your NLP DNN Models Like Playing Lego

IJCNLP 2019 Microsoft/NeuronBlocks

Deep Neural Networks (DNNs) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.

Show Your Work: Improved Reporting of Experimental Results

IJCNLP 2019 DerwenAI/pytextrank

Research in natural language processing proceeds, in part, by demonstrating that new models achieve superior performance (e.g., accuracy) on held-out test data compared to previous results.

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

IJCNLP 2019 UKPLab/sentence-transformers

However, BERT requires that both sentences be fed into the network together, which causes massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours).

Tasks: Regression, Semantic Similarity, Semantic Textual Similarity, Sentence Embeddings, Transfer Learning
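
The ~50 million figure comes from the number of sentence pairs: 10,000 × 9,999 / 2 ≈ 50 million cross-encoder passes. Sentence-BERT avoids this by encoding each sentence once and comparing fixed-size embeddings. Below is a minimal sketch using the UKPLab/sentence-transformers package listed above; the checkpoint name is an assumption, and any pretrained SentenceTransformer model would work.

```python
# Bi-encoder similarity search with sentence-transformers (sketch).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bert-base-nli-mean-tokens')  # assumed checkpoint

sentences = [
    'A man is eating food.',
    'A man is eating a piece of bread.',
    'The girl is carrying a baby.',
]

# One forward pass per sentence (n passes), instead of one pass per
# sentence pair (~n^2 / 2 passes for a cross-encoder like plain BERT).
embeddings = model.encode(sentences)  # shape: (n, embedding_dim)

# Cosine similarity between all pairs via normalized dot products.
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity_matrix = normalized @ normalized.T
print(np.round(similarity_matrix, 3))
```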

Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs

IJCNLP 2019 jsalt18-sentence-repl/jiant

We conclude that a variety of methods is necessary to reveal all relevant aspects of a model's grammatical knowledge in a given domain.

UER: An Open-Source Toolkit for Pre-training Models

IJCNLP 2019 dbiir/UER-py

Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks.

Learning to Copy for Automatic Post-Editing

IJCNLP 2019 THUNLP-MT/THUMT

To better identify translation errors, our method learns the representations of source sentences and system outputs in an interactive way.

Tasks: Automatic Post-Editing, Machine Translation

Language Models as Knowledge Bases?

IJCNLP 2019 facebookresearch/LAMA

Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks.

Tasks: Language Modelling, Open-Domain Question Answering
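
This entry's LAMA probe queries pretrained language models with cloze-style statements to test what factual knowledge they store. The sketch below illustrates that idea with the HuggingFace transformers fill-mask pipeline rather than the facebookresearch/LAMA codebase itself; the model choice and prompt are illustrative assumptions.

```python
# Cloze-style knowledge probing in the spirit of LAMA (sketch; does not
# use the facebookresearch/LAMA codebase).
from transformers import pipeline

fill_mask = pipeline('fill-mask', model='bert-base-uncased')

# Pose a factual query as a cloze statement and inspect the top predictions.
for prediction in fill_mask('The capital of France is [MASK].'):
    print(prediction['token_str'], round(prediction['score'], 3))
```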