no code implementations • ACL (InterNLP) 2021 • Hugo Zylberajch, Piyawat Lertvittayakumjorn, Francesca Toni
Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to a lack of generalizability.
no code implementations • 19 Oct 2023 • Thanakorn Thaminkaew, Piyawat Lertvittayakumjorn, Peerapon Vateekul
Specifically, we use the manual labels along with the conjunction "and" to induce the model to generate more effective words for the verbalizer.
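A minimal sketch of the idea described above: appending the manual label word plus the conjunction "and" to a prompt, so that a masked language model is induced to complete it with related words that can extend the verbalizer. The template wording and function name here are illustrative assumptions, not the paper's exact prompt.

```python
def build_verbalizer_prompt(text: str, label_word: str) -> str:
    """Build a prompt that pairs the manual label word with "and",
    inducing a masked LM to generate additional verbalizer words.

    Hypothetical template for illustration; the actual prompt format
    used in the paper may differ.
    """
    return f"{text} This text is about {label_word} and [MASK]."

# The [MASK] completions proposed by the model (e.g. via a fill-mask
# pipeline) would then be added as extra label words for that class.
prompt = build_verbalizer_prompt("The team won the final match.", "sports")
```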
no code implementations • 22 Jun 2023 • Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger
In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT-4.
no code implementations • 22 May 2022 • Piyawat Lertvittayakumjorn, Francesca Toni
Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features.
1 code implementation • 21 Mar 2022 • Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger
We also provide a synthesizing overview of recent approaches for explainable machine translation metrics and discuss how they relate to those goals and properties.
1 code implementation • EMNLP (Eval4NLP) 2021 • Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, Yang Gao
In this paper, we introduce the Eval4NLP-2021 shared task on explainable quality estimation.
1 code implementation • ACL 2021 • Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, Peerapon Vateekul
Lifelong learning (LL) aims to train a neural network on a stream of tasks while retaining knowledge from previous tasks.
no code implementations • ACL 2021 • Piyawat Lertvittayakumjorn, Ivan Petej, Yang Gao, Yamuna Krishnamurthy, Anna Van Der Gaag, Robert Jago, Kostas Stathis
Health professional regulators aim to protect the health and well-being of patients and the public by setting standards for scrutinising and overseeing the training and conduct of health and care professionals.
no code implementations • ACL 2021 • Pollawat Hongwimol, Peeranuth Kehasukcharoen, Pasit Laohawarutchai, Piyawat Lertvittayakumjorn, Aik Beng Ng, Zhangsheng Lai, Timothy Liu, Peerapon Vateekul
We introduce Explainable Scientific Research Assistant (ESRA), a literature discovery platform that augments search results with relevant details and explanations, aiding users in understanding more about their queries and the returned papers beyond existing literature search systems.
1 code implementation • NAACL 2021 • Piyawat Lertvittayakumjorn, Daniele Bonadiman, Saab Mansour
Practically, some combinations of slot values can be invalid according to external knowledge.
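The constraint described above can be sketched as a lookup against an external knowledge base of permitted slot-value combinations; predicted combinations absent from the knowledge base are rejected as invalid. The slot names, the data, and the dictionary-based representation below are illustrative assumptions, not the paper's actual system.

```python
# Hypothetical external knowledge: the set of valid (city, cuisine)
# slot-value pairs. A real system would query a curated knowledge base.
VALID_PAIRS = {
    ("Paris", "French"),
    ("Tokyo", "Japanese"),
    ("Rome", "Italian"),
}

def is_valid_combination(slots: dict) -> bool:
    """Return True if the predicted slot values form a combination
    permitted by the external knowledge base."""
    return (slots.get("city"), slots.get("cuisine")) in VALID_PAIRS

# A slot filler's output can be checked before it is accepted.
prediction = {"city": "Paris", "cuisine": "Japanese"}
```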
no code implementations • 30 Apr 2021 • Piyawat Lertvittayakumjorn, Francesca Toni
Debugging a machine learning model is hard since the bug usually involves the training data and the learning process.
1 code implementation • LREC 2022 • Piyawat Lertvittayakumjorn, Leshem Choshen, Eyal Shnarch, Francesca Toni
Data exploration is an important step of every data science and machine learning project, including those involving textual data.
no code implementations • 10 Dec 2020 • Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni
Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs).
1 code implementation • EMNLP 2020 • Piyawat Lertvittayakumjorn, Lucia Specia, Francesca Toni
Since obtaining a perfect training dataset (i.e., a dataset which is considerably large, unbiased, and well-representative of unseen cases) is hardly possible, many real-world text classifiers are trained on the available, yet imperfect, datasets.
1 code implementation • IJCNLP 2019 • Piyawat Lertvittayakumjorn, Francesca Toni
Due to the black-box nature of deep learning models, methods for explaining the models' results are crucial to gain trust from humans and support collaboration between AIs and humans.
2 code implementations • NAACL 2019 • Jingqing Zhang, Piyawat Lertvittayakumjorn, Yike Guo
Insufficient or even unavailable training data of emerging classes is a big challenge of many classification tasks, including text classification.