ACL 2018

Hierarchical Neural Story Generation

ACL 2018 pytorch/fairseq

We explore story generation: creative systems that can build coherent and fluent passages of text about a topic.

STORY GENERATION
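The hierarchy in the title refers to two-stage generation: sample a short premise first, then condition the story on it. A minimal sketch of that idea follows, using hypothetical stand-in models (this is not the fairseq API):

    # Hypothetical stand-in for a trained language model; only the
    # two-stage interface matters for this sketch.
    class StubLM:
        def __init__(self, continuation):
            self.continuation = continuation

        def sample(self, prefix):
            # A real model would decode from p(text | prefix).
            return prefix + " " + self.continuation

    def generate_story(topic, premise_model, story_model):
        premise = premise_model.sample(prefix=topic)  # stage 1: plan a premise
        return story_model.sample(prefix=premise)     # stage 2: realize the story

    print(generate_story("dragons",
                         premise_model=StubLM("A knight finds an egg."),
                         story_model=StubLM("It hatches at midnight.")))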

Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates

ACL 2018 google/sentencepiece

Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT).

LANGUAGE MODELLING · MACHINE TRANSLATION
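A minimal sketch of the sampling side of subword regularization with the SentencePiece Python bindings: each call can return a different segmentation of the same sentence, so an NMT model sees multiple subword candidates during training. The model file path is a hypothetical placeholder; a trained model is required.

    import sentencepiece as spm

    sp = spm.SentencePieceProcessor(model_file="spm.model")  # placeholder path
    for _ in range(3):
        # enable_sampling draws from the n-best segmentations;
        # alpha controls how peaked the sampling distribution is.
        print(sp.encode("Hello world", out_type=str,
                        enable_sampling=True, nbest_size=-1, alpha=0.1))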

Chinese NER Using Lattice LSTM

ACL 2018 jiesutd/LatticeLSTM

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon.

CHINESE NAMED ENTITY RECOGNITION
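A minimal sketch of the lattice construction step that feeds the model: for each position in the character sequence, collect every lexicon word that matches a span starting there, so the LSTM can receive word cells alongside character cells. The lexicon and sentence below are toy examples.

    def build_lattice(chars, lexicon, max_len=4):
        lattice = []  # (start, end, word) spans that match the lexicon
        for i in range(len(chars)):
            for j in range(i + 1, min(i + max_len, len(chars)) + 1):
                word = "".join(chars[i:j])
                if word in lexicon:
                    lattice.append((i, j, word))
        return lattice

    lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
    chars = list("南京市长江大桥")
    print(build_lattice(chars, lexicon))
    # [(0, 2, '南京'), (0, 3, '南京市'), (2, 4, '市长'),
    #  (3, 5, '长江'), (3, 7, '长江大桥'), (5, 7, '大桥')]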

Marian: Fast Neural Machine Translation in C++

ACL 2018 marian-nmt/marian

We present Marian, an efficient and self-contained Neural Machine Translation framework with an integrated automatic differentiation engine based on dynamic computation graphs.

MACHINE TRANSLATION
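A conceptual sketch (in Python, not Marian's C++) of what "automatic differentiation over dynamic computation graphs" means: each operation records its inputs and local derivatives as it executes, and the backward pass replays that tape. This illustrates the idea only, not Marian's implementation.

    class Var:
        def __init__(self, value, parents=()):
            self.value, self.parents, self.grad = value, parents, 0.0

        def __mul__(self, other):
            # Record each parent with its local derivative d(out)/d(parent).
            return Var(self.value * other.value,
                       parents=[(self, other.value), (other, self.value)])

        def __add__(self, other):
            return Var(self.value + other.value,
                       parents=[(self, 1.0), (other, 1.0)])

        def backward(self, grad=1.0):
            self.grad += grad
            for parent, local in self.parents:
                parent.backward(grad * local)

    x, y = Var(2.0), Var(3.0)
    z = x * y + x          # the graph is built on the fly as ops run
    z.backward()
    print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0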

Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

ACL 2018 ChenRocks/fast_abs_rl

Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary.

ABSTRACTIVE TEXT SUMMARIZATION
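A minimal sketch of the extract-then-rewrite pipeline: an extractor scores sentences, the top-k are kept in document order, and an abstractor rewrites each one independently, which is what makes decoding fast and parallelizable. The extractor and abstractor below are toy stand-ins, and the REINFORCE training of the selector is omitted.

    def summarize(document, extractor, abstractor, k=3):
        sentences = document.split(". ")
        # Stage 1: keep the k most salient sentences, in original order.
        picked = sorted(range(len(sentences)),
                        key=lambda i: extractor(sentences[i]),
                        reverse=True)[:k]
        # Stage 2: rewrite each selected sentence independently.
        return " ".join(abstractor(sentences[i]) for i in sorted(picked))

    # Toy stand-ins: salience = sentence length, rewriting = truncation.
    print(summarize("A long first sentence here. Short one. "
                    "Another long informative sentence",
                    extractor=len,
                    abstractor=lambda s: " ".join(s.split()[:5]) + ".",
                    k=2))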

A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings

ACL 2018 artetxem/vecmap

Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training; this paper instead proposes a fully unsupervised initialization combined with a robust self-learning algorithm that iteratively refines the mapping.

WORD EMBEDDINGS
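A minimal sketch of one self-learning iteration, assuming row-normalized embedding matrices X (source) and Z (target) as numpy arrays: solve the orthogonal Procrustes problem on the current dictionary, then re-induce the dictionary from nearest neighbors under the new mapping. The paper's unsupervised initialization and robustness refinements (e.g. symmetric re-weighting) are omitted.

    import numpy as np

    def self_learning_step(X, Z, src_idx, trg_idx):
        # Orthogonal W minimizing ||X[src] W - Z[trg]||_F,
        # solved in closed form via SVD (Procrustes).
        u, _, vt = np.linalg.svd(X[src_idx].T @ Z[trg_idx])
        W = u @ vt
        # Re-induce the dictionary: nearest target neighbor of each
        # mapped source word under cosine similarity.
        sims = (X @ W) @ Z.T
        return W, np.arange(len(X)), sims.argmax(axis=1)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50)); X /= np.linalg.norm(X, axis=1, keepdims=True)
    Z = rng.normal(size=(100, 50)); Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    seed = np.arange(10)  # toy seed dictionary
    W, src, trg = self_learning_step(X, Z, seed, seed)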

Simple and Effective Multi-Paragraph Reading Comprehension

ACL 2018 allenai/document-qa

We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.

QUESTION ANSWERING · READING COMPREHENSION
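A minimal sketch of the document-to-paragraph adaptation, using a TF-IDF selection heuristic of the kind the paper evaluates: rank paragraphs by similarity to the question and run the paragraph-level QA model only on the top-k. The paper's shared-normalization training objective across paragraphs is omitted here.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def select_paragraphs(question, paragraphs, k=2):
        tfidf = TfidfVectorizer(stop_words="english")
        matrix = tfidf.fit_transform([question] + paragraphs)
        scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
        return [paragraphs[i] for i in scores.argsort()[::-1][:k]]

    paragraphs = ["The Nile flows through Egypt.",
                  "Paris is the capital of France.",
                  "France borders Spain and Italy."]
    print(select_paragraphs("What is the capital of France?", paragraphs, k=1))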

Constituency Parsing with a Self-Attentive Encoder

ACL 2018 nikitakit/self-attentive-parser

We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser.

CONSTITUENCY PARSING
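A minimal sketch of the encoder swap, using PyTorch's built-in transformer encoder as a stand-in for a BiLSTM over the embedded words: the output is one contextualized vector per word, which a span scorer consumes downstream. Dimensions are illustrative, and the paper's factored content/position attention is not what nn.TransformerEncoder implements.

    import torch
    import torch.nn as nn

    d_model = 128
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                   dim_feedforward=256, batch_first=True),
        num_layers=4,
    )
    words = torch.randn(1, 20, d_model)  # one sentence of 20 embedded words
    spans = encoder(words)               # contextualized vectors for span scoring
    print(spans.shape)                   # torch.Size([1, 20, 128])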