no code implementations • 21 Dec 2023 • Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
Typhoon is a series of Thai large language models (LLMs) developed specifically for the Thai language.
1 code implementation • 10 Sep 2023 • Adian Liusie, Potsawee Manakul, Mark J. F. Gales
To address this problem, it is possible to optimise classification thresholds on a labelled data set; however, doing so sacrifices some of the advantages of prompt-based classifiers.
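A minimal sketch of this calibration step, assuming a binary prompt-based classifier that emits a probability for a label word (the function names, inputs, and accuracy criterion are illustrative, not the paper's exact setup):

```python
import numpy as np

def tune_threshold(dev_probs, dev_labels):
    """Pick the decision threshold that maximises dev-set accuracy.

    dev_probs:  P(label word | prompt) per example (hypothetical scores
                from a prompt-based classifier), shape (N,)
    dev_labels: gold 0/1 labels, shape (N,)
    """
    best_t, best_acc = 0.5, 0.0
    for t in np.unique(dev_probs):  # candidate thresholds from the data
        acc = float(np.mean((dev_probs >= t) == dev_labels))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Usage: t = tune_threshold(probs, labels); predictions = probs >= t
```

The catch noted above is that this step reintroduces a dependency on labelled data, which zero-shot prompting was meant to avoid.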
1 code implementation • 15 Jul 2023 • Adian Liusie, Potsawee Manakul, Mark J. F. Gales
Current developments in large language models (LLMs) have enabled impressive zero-shot capabilities across various natural language tasks.
no code implementations • 9 Jul 2023 • Rao Ma, Mengjie Qian, Potsawee Manakul, Mark Gales, Kate Knill
In this paper, we investigate using ChatGPT, a generative LLM, for ASR error correction.
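As a hedged illustration of this setup (the prompt wording, model name, and API usage below are assumptions for the sketch, not the paper's exact configuration), one might pass the recogniser's N-best hypotheses to a chat LLM:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def correct_asr(nbest):
    """Ask a chat LLM to pick/repair the best transcription from an N-best list."""
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    prompt = (
        "Below are N-best hypotheses from a speech recogniser.\n"
        f"{hyps}\n"
        "Output only the most likely correct transcription."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```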
1 code implementation • 8 Jun 2023 • Potsawee Manakul, Yassir Fathullah, Adian Liusie, Vyas Raina, Vatsal Raina, Mark Gales
In this paper, we consider the challenge of summarizing patients' medical progress notes in a limited data setting.
3 code implementations • 15 Mar 2023 • Potsawee Manakul, Adian Liusie, Mark J. F. Gales
In this work, we propose "SelfCheckGPT", a simple sampling-based approach that can be used to fact-check the responses of black-box models in a zero-resource fashion, i.e. without an external database.
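A toy sketch of the idea: sample several stochastic responses to the same query, then flag sentences of the main response that the samples fail to support. The unigram-overlap score below is a simplified stand-in for the paper's stronger consistency measures (e.g. BERTScore-, QA-, or NLI-based variants), and the threshold is illustrative:

```python
def support_score(sentence, samples):
    """Average fraction of the sentence's words that reappear in each sample."""
    words = set(sentence.lower().split())
    overlaps = [
        len(words & set(s.lower().split())) / max(len(words), 1)
        for s in samples
    ]
    return sum(overlaps) / len(overlaps)  # high = consistent, low = suspect

def selfcheck(response_sentences, sampled_responses, threshold=0.3):
    # threshold is an illustrative value, not taken from the paper
    return [
        (sent, support_score(sent, sampled_responses) < threshold)
        for sent in response_sentences  # True flag = likely hallucinated
    ]
```

The intuition: facts the model actually knows tend to recur across samples, whereas hallucinated details are sampled inconsistently.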
2 code implementations • 28 Jan 2023 • Potsawee Manakul, Adian Liusie, Mark J. F. Gales
In this work, we introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared.
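As a hedged sketch: once the information in the source and summary is expressed as probability distributions over the same support (for instance, answer distributions over shared question options, which is an assumption about the setup here), direct comparison reduces to standard information-theoretic distances:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def total_variation(p, q):
    """Total variation distance: 0 if identical, 1 if disjoint."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

# Identical information -> distance 0; divergent information -> large distance.
print(total_variation([0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))  # 0.6
```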
1 code implementation • 28 Aug 2022 • Potsawee Manakul, Mark J. F. Gales
The podcast summary assessment data is available.
no code implementations • EMNLP 2021 • Potsawee Manakul, Mark J. F. Gales
Second, we propose a modified architecture that selects a subset of sentences to constrain the encoder-decoder attention.
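A minimal sketch of what constraining cross-attention to selected sentences could look like (single-head, unbatched, with hypothetical inputs; not the paper's exact architecture):

```python
import torch

def constrained_cross_attention(q, k, v, token_sentence_ids, selected_sentences):
    """Attend only to source tokens belonging to the selected sentences.

    q: (tgt_len, d) decoder queries; k, v: (src_len, d) encoder keys/values
    token_sentence_ids: (src_len,) sentence index of each source token
    selected_sentences: set of sentence indices the decoder may attend to
    """
    scores = q @ k.T / k.shape[-1] ** 0.5              # (tgt_len, src_len)
    keep = torch.tensor(
        [int(s) in selected_sentences for s in token_sentence_ids]
    )
    scores = scores.masked_fill(~keep, float("-inf"))  # block other sentences
    return torch.softmax(scores, dim=-1) @ v           # (tgt_len, d)
```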
1 code implementation • ACL 2021 • Potsawee Manakul, Mark J. F. Gales
Transformer-based models have achieved state-of-the-art results in a wide range of natural language processing (NLP) tasks including document summarization.
1 code implementation • 2 Apr 2021 • Qingyun Dou, Yiting Lu, Potsawee Manakul, Xixin Wu, Mark J. F. Gales
This approach guides the model with the generated output history and reference attention, and can reduce the training-inference mismatch without a schedule or a classifier.
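A rough sketch of a training objective in this spirit (the decomposition into a token loss plus an attention-matching term, and the weight gamma, are assumptions for illustration, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def attention_forcing_loss(logits, targets, model_attn, ref_attn, gamma=1.0):
    """Token loss computed with generated history + attention-matching term.

    logits:     (T, vocab) predictions from a pass fed its own generated history
    targets:    (T,) reference tokens
    model_attn: (T, src_len) attention from the generated-history pass
    ref_attn:   (T, src_len) reference attention (e.g. from a teacher-forced pass)
    """
    ce = F.cross_entropy(logits, targets)
    kl = F.kl_div(model_attn.clamp_min(1e-12).log(), ref_attn,
                  reduction="batchmean")
    return ce + gamma * kl  # gamma is an illustrative weight
```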
no code implementations • 4 Dec 2020 • Potsawee Manakul, Mark Gales
Our approach consists of two steps: (1) Filtering redundant or less informative sentences in the transcription using the attention of a hierarchical model; (2) Applying a state-of-the-art text summarisation system (BART) fine-tuned on the Podcast data using a sequence-level reward function.
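A hedged sketch of step (1), with a stand-in salience score (the paper derives scores from a hierarchical model's attention; the top-k truncation below is illustrative):

```python
def filter_transcript(sentences, salience_scores, max_sentences=50):
    """Keep the top-k most salient sentences, preserving original order."""
    ranked = sorted(range(len(sentences)), key=lambda i: -salience_scores[i])
    keep = sorted(ranked[:max_sentences])
    return " ".join(sentences[i] for i in keep)

# The filtered transcript can then be summarised by a BART model fine-tuned
# on podcast data, e.g. transformers' BartForConditionalGeneration.
```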