Search Results for author: Alexander Fabbri

Found 10 papers, 4 papers with code

Fair Abstractive Summarization of Diverse Perspectives

1 code implementation • 14 Nov 2023 Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang

However, current work on summarization metrics and Large Language Model (LLM) evaluation has not explored fair abstractive summarization.

Abstractive Text Summarization • Fairness

From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting

no code implementations • 8 Sep 2023 Griffin Adams, Alexander Fabbri, Faisal Ladhak, Eric Lehman, Noémie Elhadad

We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are denser than those generated by a vanilla prompt and almost as dense as human-written summaries.

Informativeness
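The core idea of Chain of Density prompting is to ask the model for several increasingly entity-dense rewrites of the same summary without letting the summary grow longer. A minimal sketch of such a prompt is below; the exact wording is illustrative, not the paper's verbatim prompt, and the round and length parameters are assumptions:

```python
def build_cod_prompt(article: str, n_rounds: int = 5, words: int = 80) -> str:
    """Construct a single Chain-of-Density-style prompt that requests
    n_rounds increasingly dense summaries of the given article.
    (Illustrative wording; not the paper's exact prompt.)"""
    return (
        f"Article:\n{article}\n\n"
        f"You will generate {n_rounds} increasingly dense summaries "
        "of the article above.\n"
        f"Repeat the following 2 steps {n_rounds} times:\n"
        "1. Identify 1-3 informative entities from the article that are "
        "missing from the previous summary.\n"
        "2. Write a new summary of identical length that covers every "
        "entity from the previous summary plus the missing ones.\n"
        f"Each summary should be about {words} words. Never drop an "
        "entity; make space by fusing, compressing, and removing "
        "filler phrases."
    )

# The resulting string would be sent to the model as a single prompt.
prompt = build_cod_prompt("<article text here>")
```

The single-prompt design lets the model see its own earlier drafts, which is what drives each rewrite to pack in more entities at a fixed length.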

Surfer100: Generating Surveys From Web Resources, Wikipedia-style

no code implementations LREC 2022 Irene Li, Alexander Fabbri, Rina Kawamura, Yixin Liu, Xiangru Tang, Jaesung Tae, Chang Shen, Sally Ma, Tomoe Mizutani, Dragomir Radev

Fast-developing fields such as Artificial Intelligence (AI) often outpace the efforts of encyclopedic sources such as Wikipedia, which either do not completely cover recently-introduced topics or lack such content entirely.

Language Modelling

Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries

no code implementations NAACL 2022 Xiangru Tang, Alexander Fabbri, Haoran Li, Ziming Mao, Griffin Thomas Adams, Borui Wang, Asli Celikyilmaz, Yashar Mehdad, Dragomir Radev

Current pre-trained models applied to summarization are prone to factual inconsistencies which either misrepresent the source text or introduce extraneous information.

R-VGAE: Relational-variational Graph Autoencoder for Unsupervised Prerequisite Chain Learning

1 code implementation COLING 2020 Irene Li, Alexander Fabbri, Swapnil Hingmire, Dragomir Radev

The task of concept prerequisite chain learning is to automatically determine the existence of prerequisite relationships among concept pairs.
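The task can be pictured as predicting directed edges in a concept graph: given a pair of concepts, decide whether mastering one is required before the other. A toy illustration of the task's input/output is below (this sketches the task itself, not the R-VGAE model; the concept graph is hand-made and hypothetical):

```python
# Hypothetical toy prerequisite graph: concept -> concepts it enables.
PREREQ_EDGES = {
    "linear algebra": ["neural networks"],
    "probability": ["bayesian networks", "neural networks"],
    "neural networks": ["graph autoencoders"],
}

def is_prerequisite(a: str, b: str) -> bool:
    """True if concept `a` is a (possibly transitive) prerequisite of
    concept `b`, i.e. `b` is reachable from `a` in the toy graph."""
    stack = list(PREREQ_EDGES.get(a, []))
    seen = set()
    while stack:
        node = stack.pop()
        if node == b:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(PREREQ_EDGES.get(node, []))
    return False
```

A learned model replaces the hand-made edge table: it must predict, for unseen concept pairs, whether such a prerequisite edge exists.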

Zero-shot Transfer Learning for Semantic Parsing

no code implementations • 27 Aug 2018 Javid Dadashkarimi, Alexander Fabbri, Sekhar Tatikonda, Dragomir R. Radev

In this paper, we propose to use feature transfer in a zero-shot experimental setting on the task of semantic parsing.

Semantic Parsing • Transfer Learning
