Search Results for author: Markus Dreyer

Found 17 papers, 6 papers with code

Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization

no code implementations · EMNLP (newsum) 2021 · Khalil Mrini, Can Liu, Markus Dreyer

We introduce a deep reinforcement learning approach to topic-focused abstractive summarization, trained on rewards with a novel negative-example baseline (sketched below).

Abstractive Text Summarization · Deep Reinforcement Learning · +1
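For the entry above, here is a hedged sketch of a REINFORCE-style loss in which the reward of a sampled summary is baselined against the reward of a negative (e.g. off-topic) example. All names are illustrative assumptions, not the authors' code.

```python
import torch

def reinforce_loss(sample_log_probs: torch.Tensor,
                   sample_reward: float,
                   negative_reward: float) -> torch.Tensor:
    """REINFORCE-style loss with a negative-example baseline (sketch).

    sample_log_probs: 1-D tensor of token log-probs for a sampled summary.
    sample_reward:    scalar reward for the sampled summary (e.g. topic focus).
    negative_reward:  reward of a negative example, used as the baseline.
    """
    advantage = sample_reward - negative_reward  # positive if better than the negative
    return -advantage * sample_log_probs.sum()   # minimizing reinforces good samples
```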

Background Summarization of Event Timelines

1 code implementation · 24 Oct 2023 · Adithya Pratapa, Kevin Small, Markus Dreyer

Generating concise summaries of news events is a challenging natural language processing task.

News Summarization · Question Answering

On Conditional and Compositional Language Model Differentiable Prompting

no code implementations · 4 Jul 2023 · Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer

Prompting has been shown to be an effective method for adapting a frozen Pretrained Language Model (PLM) to perform well on downstream tasks (a minimal soft-prompt sketch follows below).

Few-Shot Learning · Language Modelling · +1
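As a hedged illustration of differentiable prompting in general (not the paper's conditional/compositional method), the sketch below prepends trainable soft-prompt vectors to a frozen model's input embeddings; only the prompt parameters receive gradients.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to frozen PLM inputs (sketch)."""

    def __init__(self, prompt_len: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen PLM's embedding layer
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```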

Faithfulness-Aware Decoding Strategies for Abstractive Summarization

1 code implementation · 6 Mar 2023 · David Wan, Mengwen Liu, Kathleen McKeown, Markus Dreyer, Mohit Bansal

We present a systematic study of the effect of generation techniques such as beam search and nucleus sampling on faithfulness in abstractive summarization (a nucleus-sampling sketch follows below).

Abstractive Text Summarization
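For context on the decoding strategies studied above, here is a minimal nucleus (top-p) sampling sketch. It is a generic implementation of the standard technique, not the paper's faithfulness-aware variant.

```python
import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.9) -> torch.Tensor:
    """Sample one token id from the smallest set of tokens whose
    cumulative probability exceeds p (top-p / nucleus sampling).

    logits: 1-D tensor of unnormalized scores over the vocabulary.
    """
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Zero out tokens once the mass *before* them already exceeds p.
    sorted_probs[cumulative - sorted_probs > p] = 0.0
    sorted_probs = sorted_probs / sorted_probs.sum()
    return sorted_ids[torch.multinomial(sorted_probs, num_samples=1)]
```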

FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations

3 code implementations · NAACL 2022 · Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal

Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, which severely limits their trustworthiness and usefulness in real-world applications.

Abstractive Text Summarization · ARC

Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization

no code implementations · 5 Aug 2021 · Markus Dreyer, Mengwen Liu, Feng Nan, Sandeep Atluri, Sujith Ravi

Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents (a sketch of one common abstractiveness proxy follows below).

Abstractive Text Summarization
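One common proxy for abstractiveness, though not necessarily the metric used in the paper above, is the fraction of summary n-grams that do not appear in the source. A hedged sketch:

```python
def novel_ngram_ratio(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams absent from the source: a rough
    abstractiveness proxy (higher means more abstractive)."""
    def ngrams(text: str) -> set:
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    summary_ngrams = ngrams(summary)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams - ngrams(source)) / len(summary_ngrams)
```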

Transductive Learning for Abstractive News Summarization

no code implementations · 17 Apr 2021 · Arthur Bražinskas, Mengwen Liu, Ramesh Nallapati, Sujith Ravi, Markus Dreyer

This applies to scenarios such as a news publisher that trains a summarizer on dated news and must then summarize incoming recent news.

Abstractive Text Summarization · News Summarization · +1

Just ASK: Building an Architecture for Extensible Self-Service Spoken Language Understanding

no code implementations · 1 Nov 2017 · Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn Hoffmeister, Markus Dreyer, Stanislav Peshterliev, Ankur Gandhe, Denis Filimonov, Ariya Rastrow, Christian Monson, Agnika Kumar

This paper presents the design of the machine learning architecture underlying the Alexa Skills Kit (ASK), a large-scale Spoken Language Understanding (SLU) Software Development Kit (SDK) that enables developers to extend the capabilities of Amazon's virtual assistant, Alexa.

Spoken Language Understanding

Transfer Learning for Neural Semantic Parsing

no code implementations · WS 2017 · Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer

The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL); a toy example follows below.

Semantic Parsing · Transfer Learning
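To make the task concrete, here is an entirely hypothetical utterance-to-MRL pair of the kind a neural semantic parser learns to produce; the intent and slot names are invented for illustration.

```python
# Hypothetical example: the parser maps a natural-language utterance
# to a machine-interpretable meaning representation (MRL).
utterance = "play the latest album by Daft Punk"
meaning_representation = 'PlayMusic(artist="Daft Punk", album_filter="latest")'
```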
