EACL 2017

Using the Output Embedding to Improve Language Models

EACL 2017 lium-lst/nmtpy

We study the topmost weight matrix of neural network language models, i.e. the output embedding, and show that tying it with the input embedding improves the model.
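A minimal sketch of the weight-tying idea, assuming a toy softmax language model (hypothetical shapes; not the paper's code): the input embedding `E` (vocab x d) and the output projection have the same shape, so a single matrix can serve both roles.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 4
E = rng.normal(size=(vocab, d))   # input embedding, reused as the output matrix

def next_word_logits(word_id, E):
    h = np.tanh(E[word_id])       # toy "hidden state" derived from the input embedding
    return h @ E.T                # tied output projection: reuse E instead of a separate U

logits = next_word_logits(3, E)
probs = np.exp(logits - logits.max())
probs /= probs.sum()              # softmax over the vocabulary
```

Sharing the matrix halves the embedding parameters and forces input and output representations to agree.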

JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction

EACL 2017 keisks/jfleg

We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for developing and evaluating grammatical error correction (GEC).

Identifying beneficial task relations for multi-task learning in deep neural networks

EACL 2017 jbingel/eacl2017_mtl

Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data.
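The common MTL setup studied in such work is hard parameter sharing; a minimal sketch under assumed toy shapes (not the paper's architecture): two tasks share one hidden layer and keep task-specific output heads.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 8, 5
W_shared = rng.normal(size=(d_in, d_hid))       # shared layer, updated by both tasks
W_task = {"pos": rng.normal(size=(d_hid, 12)),  # hypothetical task-specific heads
          "chunk": rng.normal(size=(d_hid, 4))}

def forward(x, task):
    h = np.maximum(x @ W_shared, 0.0)  # shared representation (ReLU)
    return h @ W_task[task]            # task-specific logits

x = rng.normal(size=(d_in,))
pos_logits = forward(x, "pos")
chunk_logits = forward(x, "chunk")
```

Because gradients from both tasks flow through `W_shared`, each task regularizes the representation the other one uses.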

Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection

EACL 2017 vered1986/UnsupervisedHypernymy

The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution.
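One family of such distributional methods is directional inclusion measures, e.g. Weeds precision: the fraction of the narrower word's context weight that falls on contexts the broader word also occurs in. A sketch with hypothetical toy context-count vectors (not real corpus counts):

```python
import numpy as np

def weeds_prec(u, v):
    # share of u's feature mass on contexts that v also has
    shared = u[(u > 0) & (v > 0)].sum()
    return shared / u[u > 0].sum()

cat    = np.array([3.0, 2.0, 0.0, 1.0])  # toy context counts for "cat"
animal = np.array([4.0, 1.0, 2.0, 0.0])  # toy context counts for "animal"

forward  = weeds_prec(cat, animal)   # cat -> animal
backward = weeds_prec(animal, cat)   # animal -> cat
```

The asymmetry (forward > backward here) is what lets such measures predict the direction of the hypernymy relation, not just relatedness.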

Parsing Universal Dependencies without training

EACL 2017 hectormartinez/ud_unsup_parser

We propose UDP, the first training-free parser for Universal Dependencies (UD).

Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings

EACL 2017 abhipec/fnet

Fine-grained entity type classification (FETC) is the task of classifying an entity mention into a broad set of types.
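A sketch of the label-embedding idea in the title, with hypothetical names and shapes (not the paper's model): a mention representation is scored against learned type embeddings, so related types such as `/person` and `/person/artist` can lie close together in the same space.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
types = ["/person", "/person/artist", "/location"]
label_emb = rng.normal(size=(len(types), d))  # one learned vector per type
mention = rng.normal(size=(d,))               # encoded mention representation

scores = label_emb @ mention                  # dot-product score per type
predicted = types[int(np.argmax(scores))]
```

Jointly learning `label_emb` with the mention encoder lets the model share statistical strength across the type hierarchy instead of treating each type as an independent class.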