Search Results for author: Alexander R. Fabbri

Found 30 papers, 22 papers with code

Sarcasm Analysis using Conversation Context

no code implementations CL 2018 Debanjan Ghosh, Alexander R. Fabbri, Smaranda Muresan

To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the current turn.

Sarcasm Detection, Sentence
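
A minimal sketch of the setup described in the entry above: two LSTM encoders, one for the conversation context and one for the current turn, whose final states are concatenated and fed to a binary sarcasm classifier. This is not the paper's code; the layer sizes and the output head are illustrative assumptions.

import torch
import torch.nn as nn

class ContextTurnLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One encoder for the prior conversation context, one for the current turn.
        self.context_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.turn_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # sarcastic vs. not

    def forward(self, context_ids, turn_ids):
        _, (ctx_h, _) = self.context_lstm(self.embed(context_ids))
        _, (turn_h, _) = self.turn_lstm(self.embed(turn_ids))
        features = torch.cat([ctx_h[-1], turn_h[-1]], dim=-1)
        return self.classifier(features)

Attention-based variants follow the same two-input pattern, differing only in how the two encodings interact.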

What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning

no code implementations 26 Nov 2018 Irene Li, Alexander R. Fabbri, Robert R. Tung, Dragomir R. Radev

The dataset will be useful for educational purposes such as lecture preparation and organization as well as applications such as reading list generation.

Creating A Neural Pedagogical Agent by Jointly Learning to Review and Assess

2 code implementations 26 Jun 2019 Youngnam Lee, Youngduck Choi, Junghyun Cho, Alexander R. Fabbri, HyunBin Loh, Chanyou Hwang, Yongku Lee, Sang-Wook Kim, Dragomir Radev

Our model outperforms existing approaches on several metrics for predicting user response correctness, notably outperforming other methods on new users without large question-response histories.

Machine Translation, TAG

ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks

1 code implementation 4 Sep 2019 Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, Dragomir R. Radev

Scientific article summarization is challenging: large, annotated corpora are not available, and the summary should ideally include the article's impacts on the research community.

Scientific Document Summarization

Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering

1 code implementation ACL 2020 Alexander R. Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, Bing Xiang

Training a QA model on this data yields a relative F1 improvement of about 14% over a previous unsupervised model on the SQuAD dataset, and 20% when the answer is a named entity, achieving state-of-the-art performance on SQuAD for unsupervised QA.

Language Modelling, Question Answering +3
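
To make the template idea in the entry above concrete, here is a rough sketch that turns a retrieved sentence and a named-entity answer into a synthetic (context, question, answer) example for unsupervised QA. The wh-word mapping and the cloze-style substitution are assumptions for illustration, not the paper's actual templates or retrieval pipeline.

# Map entity types to wh-words (illustrative, not the paper's template inventory).
WH_BY_ENTITY_TYPE = {"PERSON": "Who", "DATE": "When", "GPE": "Where", "ORG": "What organization"}

def make_qa_example(sentence, answer_text, answer_type):
    wh = WH_BY_ENTITY_TYPE.get(answer_type, "What")
    # Cloze-style template: substitute the answer span with a wh-word.
    question = sentence.replace(answer_text, wh).rstrip(".") + "?"
    return {"context": sentence, "question": question, "answer": answer_text}

example = make_qa_example("Barack Obama was born in Honolulu in 1961.", "Honolulu", "GPE")
print(example["question"])  # -> "Barack Obama was born in Where in 1961?"

Synthetic examples like this can then be used to fine-tune an extractive QA model without any human-labeled question-answer pairs.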

SummEval: Re-evaluating Summarization Evaluation

5 code implementations 24 Jul 2020 Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, Dragomir Radev

The scarcity of comprehensive up-to-date studies on evaluation metrics for text summarization and the lack of consensus regarding evaluation protocols continue to inhibit progress.

Text Summarization
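
The scores below are made up, but they sketch the kind of meta-evaluation SummEval performs: correlating automatic metric scores with expert human judgments across summarization systems. Kendall's tau is only one of several correlation measures that could be used here.

from scipy.stats import kendalltau

# Hypothetical per-system averages: an automatic metric vs. human consistency ratings.
metric_scores = [0.41, 0.38, 0.45, 0.33, 0.40]
human_scores = [4.2, 3.9, 4.6, 3.1, 4.0]

tau, p_value = kendalltau(metric_scores, human_scores)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")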

Multi-Perspective Abstractive Answer Summarization

no code implementations 17 Apr 2021 Alexander R. Fabbri, Xiaojian Wu, Srini Iyer, Mona Diab

A major obstacle for multi-perspective, abstractive answer summarization is the absence of a dataset to provide supervision for producing such summaries.

Community Question Answering, Sentence

ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining

1 code implementation ACL 2021 Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev

While online conversations can cover a vast amount of information in many different formats, abstractive text summarization has primarily focused on modeling news articles alone.

Abstractive Text Summarization, Argument Mining +2

CaPE: Contrastive Parameter Ensembling for Reducing Hallucination in Abstractive Summarization

no code implementations 14 Oct 2021 Prafulla Kumar Choubey, Alexander R. Fabbri, Jesse Vig, Chien-Sheng Wu, Wenhao Liu, Nazneen Fatema Rajani

Then, we fine-tune a base summarization model, which is trained on all training samples, on the clean (noisy) subset to obtain an expert (anti-expert) model.

Abstractive Text Summarization, Hallucination +1
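
A minimal sketch of the parameter-ensembling step suggested by the entry above: after fine-tuning an expert on the clean subset and an anti-expert on the noisy subset, their parameters are combined with the base model's. The combination rule shown, base + alpha * (expert - anti_expert), is an assumption for illustration; the paper's exact formula may differ.

def contrastive_parameter_ensemble(base_sd, expert_sd, anti_expert_sd, alpha=1.0):
    """Nudge each base parameter toward the expert and away from the anti-expert.
    Inputs are state dicts (name -> tensor) from models sharing one architecture."""
    return {
        name: base_sd[name] + alpha * (expert_sd[name] - anti_expert_sd[name])
        for name in base_sd
    }

# Usage (hypothetical models sharing the base architecture):
# base.load_state_dict(contrastive_parameter_ensemble(
#     base.state_dict(), expert.state_dict(), anti_expert.state_dict(), alpha=0.5))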

Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand

2 code implementations NAACL 2022 Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith

We therefore propose a generalization of leaderboards, bidimensional leaderboards (Billboards), that simultaneously tracks progress in language generation models and metrics for their evaluation.

Image Captioning, Machine Translation +1

Exploring Neural Models for Query-Focused Summarization

1 code implementation Findings (NAACL) 2022 Jesse Vig, Alexander R. Fabbri, Wojciech Kryściński, Chien-Sheng Wu, Wenhao Liu

Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization.

Query-focused Summarization, Transfer Learning
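
One simple baseline consistent with the query-focused setup above is to prepend the query to the document and run an off-the-shelf abstractive summarizer; the model name and input format below are assumptions for illustration, not necessarily the configurations studied in the paper.

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
query = "What were the causes of the outage?"
document = "..."  # long source document goes here
result = summarizer(f"{query} {document}", max_length=128)
print(result[0]["summary_text"])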

Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors

1 code implementation 25 May 2022 Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett

We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.

Abstractive Text Summarization

Improving Factual Consistency in Summarization with Compression-Based Post-Editing

1 code implementation 11 Nov 2022 Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, Caiming Xiong

We propose to use sentence-compression data to train the post-editing model to take a summary with extrinsic entity errors marked with special tokens and output a compressed, well-formed summary with those errors removed.

Informativeness, Sentence +1
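
To make the post-editing setup in the entry above concrete, the sketch below marks unsupported entity spans in a draft summary with special tokens before handing it, together with the source, to a post-editor model. The marker tokens, the separator, and the example error span are illustrative assumptions, not the paper's exact format.

def mark_entity_errors(summary: str, error_spans: list[str]) -> str:
    # Wrap each unsupported span in (hypothetical) marker tokens.
    for span in error_spans:
        summary = summary.replace(span, f"<mk> {span} </mk>")
    return summary

source_doc = "A storm made landfall in Florida, killing 12 people."
draft = "The storm hit Florida on Tuesday, killing 12 people."
marked = mark_entity_errors(draft, ["on Tuesday"])  # span unsupported by the source
post_editor_input = marked + " <sep> " + source_doc
# post_edited = post_editor.generate(post_editor_input)  # trained post-editing model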

Prompted Opinion Summarization with GPT-3.5

1 code implementation 29 Nov 2022 Adithya Bhaskar, Alexander R. Fabbri, Greg Durrett

Large language models have shown impressive performance across a wide variety of tasks, including text summarization.

Opinion Summarization

Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization

1 code implementation 20 Dec 2022 Artidoro Pagnoni, Alexander R. Fabbri, Wojciech Kryściński, Chien-Sheng Wu

In long document controllable summarization, where labeled data is scarce, pretrained models struggle to adapt to the task and effectively respond to user queries.

Question Generation

Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation

1 code implementation 7 Mar 2023 Yixin Liu, Alexander R. Fabbri, Yilun Zhao, PengFei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev

Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics.

LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond

1 code implementation 23 May 2023 Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu

To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SummEdits.

Misinformation

On Learning to Summarize with Large Language Models as References

1 code implementation 23 May 2023 Yixin Liu, Kejian Shi, Katherine S He, Longtian Ye, Alexander R. Fabbri, PengFei Liu, Dragomir Radev, Arman Cohan

Meanwhile, we perform a meta-analysis on this new learning setting that reveals a discrepancy between human and LLM-based evaluation, highlighting the benefits and risks of this LLM-as-reference setting we investigated.

Contrastive Learning, Text Summarization

Generating EDU Extracts for Plan-Guided Summary Re-Ranking

1 code implementation 28 May 2023 Griffin Adams, Alexander R. Fabbri, Faisal Ladhak, Kathleen McKeown, Noémie Elhadad

Similarly, on 1k samples from CNN/DM, we show that prompting GPT-3 to follow EDU plans outperforms sampling-based methods by 1.05 ROUGE-2 F1 points.

Language Modelling, Re-Ranking

Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization

1 code implementation 15 Nov 2023 Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, PengFei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan

Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) none of the LLM-based evaluation methods achieves strong alignment with human annotators when judging the quality of candidate summaries; and (3) different LLMs show large performance gaps in summary generation and evaluation.

Benchmarking, Text Summarization

Lexical Repetitions Lead to Rote Learning: Unveiling the Impact of Lexical Overlap in Train and Test Reference Summaries

no code implementations 15 Nov 2023 Prafulla Kumar Choubey, Alexander R. Fabbri, Caiming Xiong, Chien-Sheng Wu

Ideal summarization models should generalize to novel summary-worthy content without remembering reference training summaries by rote.
