no code implementations • WS 2019 • Sebastian Gehrmann, Zachary Ziegler, Alexander Rush
Neural abstractive document summarization is commonly approached by models that exhibit a mostly extractive behavior.
no code implementations • WS 2018 • Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, Alexander Rush
Neural attention-based sequence-to-sequence models (seq2seq) (Sutskever et al., 2014; Bahdanau et al., 2014) have proven to be accurate and robust for many sequence prediction tasks.
1 code implementation • EMNLP 2018 • Luke Melas-Kyriazi, Alexander Rush, George Han
Image paragraph captioning models aim to produce detailed descriptions of a source image.
1 code implementation • WS 2018 • Alexander Rush
A major goal of open-source NLP is to quickly and accurately reproduce the results of new work, in a manner that the community can easily use and modify.
no code implementations • WS 2018 • Jean Senellart, Dakun Zhang, Bo Wang, Guillaume Klein, Jean-Pierre Ramatchandirin, Josep Crego, Alexander Rush
We present a system description of the OpenNMT Neural Machine Translation entry for the WNMT 2018 evaluation.
no code implementations • WS 2017 • Jeffrey Ling, Alexander Rush
Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization.
Ranked #25 on Document Summarization on CNN / Daily Mail
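The scaling problem the entry above refers to comes from standard soft attention: every decoding step scores all source positions, so per-step cost grows linearly with source length. A minimal NumPy sketch of single-query dot-product attention illustrates this (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def dot_product_attention(query, source_states):
    """One decoder step of soft attention over the full source.

    query: (d,) current decoder state.
    source_states: (n, d) encoder states, one per source token.
    The score/softmax/context computation touches all n positions,
    so each decoding step costs O(n * d).
    """
    scores = source_states @ query            # (n,) one score per source token
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ source_states         # (d,) attention-weighted sum
    return weights, context

rng = np.random.default_rng(0)
n, d = 500, 8   # a long source (e.g. a document of 500 tokens), small hidden size
enc = rng.normal(size=(n, d))
dec = rng.normal(size=d)
weights, context = dot_product_attention(dec, enc)
```

For a target of m tokens, total decoding work is O(m · n · d); with documents where n is in the thousands, this full pass over the source at every step is the bottleneck that motivates faster or sparser attention variants.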