Search Results for author: Philippe Laban

Found 27 papers, 16 papers with code

Are You Sure? Challenging LLMs Leads to Performance Drops in The FlipFlop Experiment

no code implementations 14 Nov 2023 Philippe Laban, Lidiya Murakhovs'ka, Caiming Xiong, Chien-Sheng Wu

The interactive nature of Large Language Models (LLMs) theoretically allows models to refine and improve their answers, yet systematic analysis of the multi-turn behavior of LLMs remains limited.

Automatic and Human-AI Interactive Text Generation

no code implementations 5 Oct 2023 Yao Dou, Philippe Laban, Claire Gardent, Wei Xu

In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that take a piece of text as input and generate a revision improved according to specific criteria (e.g., readability or linguistic style), while largely retaining the original meaning and length of the text.

Tasks: Paraphrase Generation, Style Transfer (+2 more)

Beyond the Chat: Executable and Verifiable Text-Editing with LLMs

no code implementations 27 Sep 2023 Philippe Laban, Jesse Vig, Marti A. Hearst, Caiming Xiong, Chien-Sheng Wu

Conversational interfaces powered by Large Language Models (LLMs) have recently become a popular way to obtain feedback during document editing.

Art or Artifice? Large Language Models and the False Promise of Creativity

no code implementations 25 Sep 2023 Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, Chien-Sheng Wu

Inspired by the Torrance Test of Creative Thinking (TTCT), which measures creativity as a process, we use the Consensual Assessment Technique [3] and propose the Torrance Test of Creative Writing (TTCW) to evaluate creativity as a product.

XGen-7B Technical Report

1 code implementation 7 Sep 2023 Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.


Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learning

1 code implementation 1 Jun 2023 Fan Yin, Jesse Vig, Philippe Laban, Shafiq Joty, Caiming Xiong, Chien-Sheng Jason Wu

Large language models (LLMs) have shown impressive performance in following natural language instructions to solve unseen tasks.

SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages

1 code implementation 30 May 2023 Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu

Text simplification research has mostly focused on sentence-level simplification, even though many desirable edits, such as adding relevant background information or reordering content, may require document-level context.

Tasks: Sentence, Text Simplification

LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond

1 code implementation 23 May 2023 Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu

To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SummEdits.

Tasks: Misinformation

Designing and Evaluating Interfaces that Highlight News Coverage Diversity Using Discord Questions

no code implementations 17 Feb 2023 Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs'ka, Xiang 'Anthony' Chen, Caiming Xiong

In a second usability study, we developed and implemented a reading exercise with 95 novice news readers to measure exposure to coverage diversity.

Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors

1 code implementation 25 May 2022 Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett

We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.

Tasks: Abstractive Text Summarization

Near-Negative Distinction: Giving a Second Life to Human Evaluation Datasets

1 code implementation 13 May 2022 Philippe Laban, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong

Precisely assessing progress in natural language generation (NLG) tasks is challenging, and human evaluation to establish a preference for one model's output over another is often necessary.

Tasks: NLG Evaluation, Question Answering (+3 more)

NewsPod: Automatic and Interactive News Podcasts

no code implementations 15 Feb 2022 Philippe Laban, Elicia Ye, Srujay Korlakunta, John Canny, Marti A. Hearst

News podcasts are a popular medium to stay informed and dive deep into news topics.

SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization

2 code implementations 18 Nov 2021 Philippe Laban, Tobias Schnabel, Paul N. Bennett, Marti A. Hearst

In this work, we revisit the use of NLI for inconsistency detection, finding that past work suffered from a mismatch in input granularity between NLI datasets (sentence-level) and inconsistency detection (document-level).

Tasks: Natural Language Inference, Sentence
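The granularity mismatch described in the abstract can be illustrated with a small sketch of sentence-level aggregation, in the spirit of the paper's zero-shot variant. The entailment matrix below is hypothetical toy data; a real system would obtain it by running an NLI model on every (document sentence, summary sentence) pair, and the paper's actual models and aggregation details may differ.

```python
# Sketch of aggregating sentence-level NLI scores into a document-level
# consistency score. The entailment values are hypothetical examples.

def summac_zs_score(entail: list[list[float]]) -> float:
    """entail[i][j]: entailment probability of summary sentence j
    given document sentence i. Returns a document-level score."""
    n_doc = len(entail)
    n_sum = len(entail[0])
    # For each summary sentence, keep its best-supporting document sentence.
    per_sentence = [max(entail[i][j] for i in range(n_doc))
                    for j in range(n_sum)]
    # Average the per-sentence maxima into one consistency score.
    return sum(per_sentence) / n_sum

# Toy example: summary sentence 0 is well supported, sentence 1 is not.
matrix = [[0.95, 0.10],
          [0.20, 0.15]]
score = summac_zs_score(matrix)  # mean of per-sentence maxima: (0.95 + 0.15) / 2
```

A low per-sentence maximum flags the summary sentence that no document sentence supports, which is what makes the sentence-level granularity useful for localizing inconsistencies.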

Keep it Simple: Unsupervised Simplification of Multi-Paragraph Text

1 code implementation ACL 2021 Philippe Laban, Tobias Schnabel, Paul Bennett, Marti A. Hearst

This work presents Keep it Simple (KiS), a new approach to unsupervised text simplification which learns to balance a reward across three properties: fluency, salience and simplicity.

Tasks: Reading Comprehension, Text Simplification

Can Transformer Models Measure Coherence In Text? Re-Thinking the Shuffle Test

1 code implementation ACL 2021 Philippe Laban, Luke Dai, Lucas Bandarkar, Marti A. Hearst

The Shuffle Test is the most common task to evaluate whether NLP models can measure coherence in text.
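The Shuffle Test setup can be sketched in a few lines. All names here are illustrative, not from the paper, and the toy scorer stands in for a real coherence model: a model passes an instance when no reordering of the document's sentences scores higher than the original order.

```python
from itertools import permutations

# Illustrative Shuffle Test harness (hypothetical names): a coherence
# scorer passes if the original sentence order beats every reordering.

def shuffle_test(sentences, coherence_score):
    """True if no permutation scores strictly higher than the original."""
    original = coherence_score(sentences)
    return all(coherence_score(list(p)) <= original
               for p in permutations(sentences))

# Toy scorer: rewards adjacent sentences that share vocabulary.
def overlap_scorer(sents):
    return sum(len(set(a.split()) & set(b.split()))
               for a, b in zip(sents, sents[1:]))

doc = ["the cat sat", "the cat slept", "it slept well", "well done it"]
print(shuffle_test(doc, overlap_scorer))  # → True
```

In practice a single random shuffle per document is compared rather than all permutations; the exhaustive check above just keeps the toy deterministic.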

News Headline Grouping as a Challenging NLU Task

1 code implementation NAACL 2021 Philippe Laban, Lucas Bandarkar, Marti A. Hearst

Recent progress in Natural Language Understanding (NLU) has seen the latest models outperform human performance on many standard tasks.

Tasks: Natural Language Understanding

What's The Latest? A Question-driven News Chatbot

no code implementations ACL 2020 Philippe Laban, John Canny, Marti A. Hearst

This work describes an automatic news chatbot that draws content from a diverse set of news articles and creates conversations with a user about the news.

Tasks: Chatbot

The Summary Loop: Learning to Write Abstractive Summaries Without Examples

1 code implementation ACL 2020 Philippe Laban, Andrew Hsi, John Canny, Marti A. Hearst

This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint.

Tasks: Abstractive Text Summarization, News Summarization
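The reward structure the abstract describes can be sketched with simplified stand-ins. Everything below is a toy illustration, not the paper's method: the real system scores coverage with a fill-in-the-blank model and fluency with a language model, whereas here a keyword-overlap heuristic and a fixed fluency value play those roles.

```python
# Toy sketch of a coverage/fluency reward under a length constraint,
# loosely inspired by the idea of maximizing both without examples.
# The keyword heuristic and fixed fluency score are simplified stand-ins.

def coverage(source_keywords, summary):
    """Fraction of source keywords recoverable from the summary."""
    words = set(summary.lower().split())
    hits = sum(1 for kw in source_keywords if kw in words)
    return hits / len(source_keywords)

def reward(source_keywords, summary, fluency, max_words=12):
    if len(summary.split()) > max_words:   # hard length constraint
        return 0.0
    return coverage(source_keywords, summary) * fluency

keywords = ["storm", "flooding", "evacuated"]
r = reward(keywords, "Residents evacuated after storm caused flooding", 0.9)
print(r)  # → 0.9
```

Multiplying the terms means a summary scores well only if it is simultaneously informative, fluent, and short, which is the balance an unsupervised reward has to enforce without reference summaries.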

newsLens: building and visualizing long-ranging news stories

no code implementations WS 2017 Philippe Laban, Marti Hearst

We propose a method to aggregate and organize a large, multi-source dataset of news articles into a collection of major stories, and automatically name and visualize these stories in a working system.

Tasks: Navigate
