Search Results for author: Potsawee Manakul

Found 12 papers, 8 papers with code

Mitigating Word Bias in Zero-shot Prompt-based Classifiers

1 code implementation 10 Sep 2023 Adian Liusie, Potsawee Manakul, Mark J. F. Gales

To address this problem, one can optimise classification thresholds on a labelled data set; however, this sacrifices some of the advantages of prompt-based classifiers.

Zero-Shot Learning
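The threshold optimisation mentioned in the abstract can be sketched as follows. This is a minimal illustration assuming a binary classifier that outputs a class probability; the function names, grid search, and toy data are hypothetical and not taken from the paper:

```python
# Hypothetical sketch of calibrating a decision threshold for a
# prompt-based binary classifier on a small labelled set. The paper's
# actual bias-mitigation method differs; this only illustrates the
# baseline of "optimising classification thresholds on labelled data".

def predict(prob_positive: float, threshold: float) -> int:
    """Binary decision from the classifier's positive-class probability."""
    return 1 if prob_positive >= threshold else 0

def best_threshold(probs, labels, grid=None):
    """Pick the threshold that maximises accuracy on the labelled data."""
    if grid is None:
        grid = [i / 100 for i in range(1, 100)]
    def accuracy(t):
        return sum(predict(p, t) == y for p, y in zip(probs, labels)) / len(labels)
    return max(grid, key=accuracy)

# Toy calibration set: model probabilities and gold labels.
probs = [0.9, 0.7, 0.65, 0.3, 0.2]
labels = [1, 1, 0, 0, 0]
t = best_threshold(probs, labels)
```

The need for labelled data in this step is exactly the drawback the abstract points out: it gives up the zero-shot setting that motivates prompt-based classifiers in the first place.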

LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models

1 code implementation 15 Jul 2023 Adian Liusie, Potsawee Manakul, Mark J. F. Gales

Current developments in large language models (LLMs) have enabled impressive zero-shot capabilities across various natural language tasks.

NLG Evaluation Response Generation +1

CUED at ProbSum 2023: Hierarchical Ensemble of Summarization Models

1 code implementation 8 Jun 2023 Potsawee Manakul, Yassir Fathullah, Adian Liusie, Vyas Raina, Vatsal Raina, Mark Gales

In this paper, we consider the challenge of summarizing patients' medical progress notes in a limited data setting.

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models

3 code implementations 15 Mar 2023 Potsawee Manakul, Adian Liusie, Mark J. F. Gales

In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database.

Fact Checking Hallucination +1
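The sampling-based intuition behind SelfCheckGPT can be sketched in a few lines: if the model knows a fact, stochastically re-sampled responses should agree on it, while hallucinated sentences will be poorly supported by the samples. The `token_overlap` scorer below is a toy stand-in for the paper's actual scoring variants (e.g. BERTScore-, QA-, n-gram-, or NLI-based), and the example data is illustrative:

```python
# Illustrative sketch of sampling-based self-consistency checking.
# Higher score = less support from the stochastic samples = more likely
# hallucinated. Not the paper's exact scorers; a toy overlap measure.

def token_overlap(sentence: str, sample: str) -> float:
    """Fraction of sentence tokens that also appear in a sampled response."""
    sent_tokens = set(sentence.lower().split())
    sample_tokens = set(sample.lower().split())
    if not sent_tokens:
        return 0.0
    return len(sent_tokens & sample_tokens) / len(sent_tokens)

def selfcheck_scores(sentences, samples):
    """Score each response sentence by its lack of support in the samples."""
    scores = []
    for sentence in sentences:
        support = max(token_overlap(sentence, s) for s in samples)
        scores.append(1.0 - support)
    return scores

# Sentences from the main response, plus stochastic re-samples.
response_sentences = ["Paris is the capital of France.",
                      "Paris was founded in 1999."]
stochastic_samples = ["Paris is the capital of France.",
                      "France's capital city is Paris."]
scores = selfcheck_scores(response_sentences, stochastic_samples)
```

The key property, and what makes the approach zero-resource, is that no external database is consulted: the model's own samples serve as the evidence.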

MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization

2 code implementations 28 Jan 2023 Potsawee Manakul, Adian Liusie, Mark J. F. Gales

In this work, we introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared.

Hallucination Multiple-choice +1
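The comparison step implied by the abstract can be illustrated with a standard statistical distance: given answer distributions for the same multiple-choice question conditioned on the source versus the summary, their disagreement signals an information inconsistency. Total variation is one common choice; the question-generation and question-answering models that produce these distributions are out of scope here, and the data is invented:

```python
# Illustrative sketch: compare answer distributions over the same
# multiple-choice options, once conditioned on the source and once on
# the summary. A large distance suggests the summary's information
# diverges from the source. Total variation is one standard measure.

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p_source = [0.7, 0.1, 0.1, 0.1]   # answer distribution given the source
p_summary = [0.1, 0.7, 0.1, 0.1]  # answer distribution given the summary
d = total_variation(p_source, p_summary)  # high distance -> inconsistency
```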

Long-Span Summarization via Local Attention and Content Selection

1 code implementation ACL 2021 Potsawee Manakul, Mark J. F. Gales

Transformer-based models have achieved state-of-the-art results in a wide range of natural language processing (NLP) tasks including document summarization.

Abstractive Text Summarization Document Summarization

Attention Forcing for Machine Translation

1 code implementation 2 Apr 2021 Qingyun Dou, Yiting Lu, Potsawee Manakul, Xixin Wu, Mark J. F. Gales

This approach guides the model with the generated output history and reference attention, and can reduce the training-inference mismatch without a schedule or a classifier.

Machine Translation NMT +1

CUED_speech at TREC 2020 Podcast Summarisation Track

no code implementations 4 Dec 2020 Potsawee Manakul, Mark Gales

Our approach consists of two steps: (1) Filtering redundant or less informative sentences in the transcription using the attention of a hierarchical model; (2) Applying a state-of-the-art text summarisation system (BART) fine-tuned on the Podcast data using a sequence-level reward function.
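The two-step pipeline above can be sketched as filter-then-summarise. In this toy version the attention scores are given as plain numbers and the summariser is a placeholder; the paper uses a hierarchical model's attention for step (1) and a BART model fine-tuned with a sequence-level reward for step (2):

```python
# Hypothetical sketch of the two-step podcast summarisation pipeline:
# (1) drop low-attention sentences, (2) summarise what remains.
# Attention scores and the summariser are stand-ins for the real models.

def filter_sentences(sentences, attention_scores, keep_ratio=0.5):
    """Keep the highest-attention sentences, preserving original order."""
    k = max(1, int(len(sentences) * keep_ratio))
    top = sorted(range(len(sentences)),
                 key=lambda i: attention_scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

def summarise(sentences):
    """Placeholder for the fine-tuned BART model: here, just join the input."""
    return " ".join(sentences)

transcript = ["Welcome to the show.", "Today we discuss black holes.",
              "Please like and subscribe.", "Black holes form from collapsed stars."]
attention = [0.1, 0.9, 0.05, 0.8]
summary = summarise(filter_sentences(transcript, attention, keep_ratio=0.5))
```

The filtering step matters because podcast transcripts are long and noisy; pruning redundant sentences first keeps the input within the summariser's length budget.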
