Search Results for author: Piyawat Lertvittayakumjorn

Found 16 papers, 8 papers with code

HILDIF: Interactive Debugging of NLI Models Using Influence Functions

no code implementations ACL (InterNLP) 2021 Hugo Zylberajch, Piyawat Lertvittayakumjorn, Francesca Toni

Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to a lack of generalizability.

Natural Language Inference

Label-Aware Automatic Verbalizer for Few-Shot Text Classification

no code implementations 19 Oct 2023 Thanakorn Thaminkaew, Piyawat Lertvittayakumjorn, Peerapon Vateekul

Specifically, we use the manual labels along with the conjunction "and" to induce the model to generate more effective words for the verbalizer.

Few-Shot Text Classification Language Modelling +1
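The excerpt above describes pairing each manual class label with the conjunction "and" before a mask token, so that a masked language model's predictions for the mask can supply candidate verbalizer words. A minimal sketch of that prompt construction, with the template string and function name being illustrative assumptions rather than the paper's actual implementation:

```python
# Hypothetical sketch: build one label-aware prompt per class by placing the
# manual label name and the conjunction "and" before a [MASK] slot. Feeding
# these prompts to a masked language model (not shown) would yield candidate
# verbalizer words from the top [MASK] predictions.

def build_label_aware_prompts(text, labels,
                              template="{text} It was {label} and [MASK]."):
    """Return a mapping from each class label to its label-aware prompt."""
    return {label: template.format(text=text, label=label) for label in labels}

prompts = build_label_aware_prompts(
    "The movie kept me on the edge of my seat.",
    ["great", "terrible"],
)
print(prompts["great"])
# The movie kept me on the edge of my seat. It was great and [MASK].
```

In practice the prompts would be passed to a pretrained masked language model (e.g., via a fill-mask pipeline) and the highest-probability mask fillers kept as verbalizer words for each label.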

Towards Explainable Evaluation Metrics for Machine Translation

no code implementations 22 Jun 2023 Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger

In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT4.

Machine Translation Translation

Argumentative Explanations for Pattern-Based Text Classifiers

no code implementations 22 May 2022 Piyawat Lertvittayakumjorn, Francesca Toni

Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features.

Binary text classification regression +3

Towards Explainable Evaluation Metrics for Natural Language Generation

1 code implementation 21 Mar 2022 Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger

We also provide a synthesizing overview over recent approaches for explainable machine translation metrics and discuss how they relate to those goals and properties.

Machine Translation Text Generation +2

Rational LAMOL: A Rationale-based Lifelong Learning Framework

1 code implementation ACL 2021 Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, Peerapon Vateekul

Lifelong learning (LL) aims to train a neural network on a stream of tasks while retaining knowledge from previous tasks.

Supporting Complaints Investigation for Nursing and Midwifery Regulatory Agencies

no code implementations ACL 2021 Piyawat Lertvittayakumjorn, Ivan Petej, Yang Gao, Yamuna Krishnamurthy, Anna Van Der Gaag, Robert Jago, Kostas Stathis

Health professional regulators aim to protect the health and well-being of patients and the public by setting standards for scrutinising and overseeing the training and conduct of health and care professionals.

Decision Making

ESRA: Explainable Scientific Research Assistant

no code implementations ACL 2021 Pollawat Hongwimol, Peeranuth Kehasukcharoen, Pasit Laohawarutchai, Piyawat Lertvittayakumjorn, Aik Beng Ng, Zhangsheng Lai, Timothy Liu, Peerapon Vateekul

We introduce Explainable Scientific Research Assistant (ESRA), a literature discovery platform that augments search results with relevant details and explanations, aiding users in understanding more about their queries and the returned papers beyond existing literature search systems.

Explanation-Based Human Debugging of NLP Models: A Survey

no code implementations 30 Apr 2021 Piyawat Lertvittayakumjorn, Francesca Toni

Debugging a machine learning model is hard since the bug usually involves the training data and the learning process.

Deep Argumentative Explanations

no code implementations 10 Dec 2020 Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni

Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs).

Explainable Artificial Intelligence (XAI) Text Classification

FIND: Human-in-the-Loop Debugging Deep Text Classifiers

1 code implementation EMNLP 2020 Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni

Since obtaining a perfect training dataset (i.e., a dataset which is considerably large, unbiased, and well-representative of unseen cases) is hardly possible, many real-world text classifiers are trained on the available, yet imperfect, datasets.

Human-grounded Evaluations of Explanation Methods for Text Classification

1 code implementation IJCNLP 2019 Piyawat Lertvittayakumjorn, Francesca Toni

Due to the black-box nature of deep learning models, methods for explaining the models' results are crucial to gain trust from humans and support collaboration between AIs and humans.

General Classification text-classification +1

Integrating Semantic Knowledge to Tackle Zero-shot Text Classification

2 code implementations NAACL 2019 Jingqing Zhang, Piyawat Lertvittayakumjorn, Yike Guo

Insufficient or even unavailable training data of emerging classes is a big challenge of many classification tasks, including text classification.

Data Augmentation General Classification +5
