no code implementations • EMNLP (newsum) 2021 • Khalil Mrini, Can Liu, Markus Dreyer
We introduce a deep reinforcement learning approach to topic-focused abstractive summarization, trained on rewards with a novel negative example baseline.
Abstractive Text Summarization • Deep Reinforcement Learning +1
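As a rough illustration of the idea, here is a minimal policy-gradient sketch in which the reward of a sampled summary is baselined against the reward of a negative example rather than a greedy rollout; `model.sample`, the batch keys, and the reward signature are hypothetical, not the paper's actual code.

```python
import torch

def reinforce_step(model, batch, reward_fn):
    # Sample a summary from the current policy; `model.sample` is a
    # hypothetical helper returning token ids and their log-probs.
    sampled_ids, log_probs = model.sample(batch["source"], batch["topic"])
    r_sampled = reward_fn(sampled_ids, batch["reference"], batch["topic"])

    # Baseline: reward of a pre-built negative example (e.g., an
    # off-topic summary) instead of the usual greedy-decoding baseline.
    r_negative = reward_fn(batch["negative"], batch["reference"], batch["topic"])

    # REINFORCE with baseline: raise the probability of samples that
    # score higher than the negative example.
    advantage = r_sampled - r_negative
    loss = -(advantage * log_probs.sum(dim=-1)).mean()
    loss.backward()
    return loss.item()
```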
no code implementations • 28 Feb 2024 • Alyssa Hwang, Kalpit Dixit, Miguel Ballesteros, Yassine Benajiba, Vittorio Castelli, Markus Dreyer, Mohit Bansal, Kathleen McKeown
We present NewsQs (news-cues), a dataset that provides question-answer pairs for multiple news documents.
1 code implementation • 24 Oct 2023 • Adithya Pratapa, Kevin Small, Markus Dreyer
Generating concise summaries of news events is a challenging natural language processing task.
1 code implementation • 16 Oct 2023 • Leonardo F. R. Ribeiro, Mohit Bansal, Markus Dreyer
Readability refers to how easily a reader can understand a written text.
no code implementations • 4 Jul 2023 • Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
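As background, prompt tuning in its simplest form learns a handful of "soft" prompt vectors while the PLM itself stays frozen. Below is a minimal sketch assuming a Hugging-Face-style model that accepts `inputs_embeds`; the wrapper is illustrative, not the paper's method.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Learn a small matrix of prompt embeddings; the PLM stays frozen."""
    def __init__(self, plm, prompt_len=20):
        super().__init__()
        self.plm = plm
        for p in self.plm.parameters():  # freeze the pretrained model
            p.requires_grad = False
        dim = plm.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, input_embeds):
        # Prepend the learned prompt to every sequence in the batch
        # (attention mask handling omitted for brevity).
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.plm(inputs_embeds=torch.cat([prompt, input_embeds], dim=1))
```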
1 code implementation • 6 Mar 2023 • David Wan, Mengwen Liu, Kathleen McKeown, Markus Dreyer, Mohit Bansal
We present a systematic study of the effect of generation techniques such as beam search and nucleus sampling on faithfulness in abstractive summarization.
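For context, the two decoding strategies under study can be swapped with a single argument change in standard libraries. A minimal sketch using the Hugging Face `generate` API; the checkpoint name is just an example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

article = "The storm made landfall early on Tuesday ..."  # any source document
inputs = tok(article, return_tensors="pt", truncation=True)

# Deterministic beam search.
beam = model.generate(**inputs, num_beams=5, max_length=128)

# Stochastic nucleus (top-p) sampling.
nucleus = model.generate(**inputs, do_sample=True, top_p=0.95, max_length=128)

print(tok.decode(beam[0], skip_special_tokens=True))
print(tok.decode(nucleus[0], skip_special_tokens=True))
```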
1 code implementation • Findings (NAACL) 2022 • Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer
We pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets.
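For reference, an adapter in this sense is a small bottleneck module trained while the backbone model stays frozen. A generic Houlsby-style sketch; the paper's exact module and insertion points may differ.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, nonlinearity, up-project, residual connection."""
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        # Only these few parameters are updated during query-based
        # pre-training on reviews and later fine-tuning.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```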
3 code implementations • NAACL 2022 • Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal
Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely limiting trust in them and their usage in real-world applications.
no code implementations • 5 Aug 2021 • Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, Sujith Ravi
Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents.
1 code implementation • NAACL 2021 • Ramakanth Pasunuru, Mengwen Liu, Mohit Bansal, Sujith Ravi, Markus Dreyer
We also show improvements in a transfer-only setup on the DUC-2004 dataset.
no code implementations • 17 Apr 2021 • Arthur Bražinskas, Mengwen Liu, Ramesh Nallapati, Sujith Ravi, Markus Dreyer
This applies to scenarios such as a news publisher training a summarizer on dated news and summarizing incoming recent news.
no code implementations • ACL 2019 • Shiva Pentyala, Mengwen Liu, Markus Dreyer
We present methods for multi-task learning that take advantage of natural groupings of related tasks.
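One simple way to exploit such groupings is hierarchical parameter sharing: all tasks share a base encoder, tasks in the same group share an additional layer, and each task keeps its own output head. A hypothetical sketch, not the paper's exact architecture.

```python
import torch.nn as nn

class GroupedMultiTask(nn.Module):
    def __init__(self, in_dim, hid_dim, groups):
        # groups: {"group_name": {"task_name": num_labels, ...}, ...}
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.group_layers = nn.ModuleDict(
            {g: nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
             for g in groups}
        )
        self.heads = nn.ModuleDict(
            {f"{g}__{t}": nn.Linear(hid_dim, n)
             for g, tasks in groups.items() for t, n in tasks.items()}
        )

    def forward(self, x, group, task):
        # Shared layer -> group-shared layer -> task-specific head.
        h = self.group_layers[group](self.shared(x))
        return self.heads[f"{group}__{task}"](h)
```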
no code implementations • 1 Nov 2017 • Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn Hoffmeister, Markus Dreyer, Stanislav Peshterliev, Ankur Gandhe, Denis Filimonov, Ariya Rastrow, Christian Monson, Agnika Kumar
This paper presents the design of the machine learning architecture that underlies the Alexa Skills Kit (ASK), a large-scale Spoken Language Understanding (SLU) Software Development Kit (SDK) that enables developers to extend the capabilities of Amazon's virtual assistant, Alexa.
no code implementations • WS 2017 • Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer
The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL).
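To make the task concrete, here are toy utterance-to-MRL pairs; the MRL syntax is invented for illustration and is not the paper's representation.

```python
# Toy examples of the natural-language -> MRL mapping a semantic
# parser learns; the MRL syntax here is invented for illustration.
examples = [
    ("play the latest album by the beatles",
     'PlayMusic(artist="the beatles", sort="latest", media="album")'),
    ("will it rain in seattle tomorrow",
     'GetWeather(condition="rain", city="seattle", date="tomorrow")'),
]
for utterance, mrl in examples:
    print(f"{utterance!r}  ->  {mrl}")
```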