no code implementations • COLING 2022 • Canasai Kruengkrai, Junichi Yamagishi
Elastic weight consolidation (EWC, Kirkpatrick et al. 2017) is a promising approach to addressing catastrophic forgetting in sequential training.
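For context, EWC discourages large changes to parameters that mattered for previously learned tasks by adding a quadratic penalty weighted by a diagonal Fisher information estimate. The sketch below shows that penalty in PyTorch; the `fisher`/`old_params` dictionaries, the `lam` coefficient, and the usage comment are illustrative assumptions rather than the authors' implementation.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2.

    `fisher` maps parameter names to diagonal Fisher estimates and `old_params`
    maps them to a snapshot of the parameters after the previous task.
    """
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Illustrative usage while training on a new task:
# loss = task_loss + ewc_penalty(model, fisher, old_params, lam=0.1)
# loss.backward()
```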
1 code implementation • 26 Mar 2024 • Shirin Dabbaghi Varnosfaderani, Canasai Kruengkrai, Ramin Yahyapour, Junichi Yamagishi
FEVEROUS is a benchmark and research initiative focused on fact extraction and verification tasks involving unstructured text and structured tabular data.
1 code implementation • 25 Oct 2023 • Yi-Chen Chang, Canasai Kruengkrai, Junichi Yamagishi
Experimental results show that the multilingual language model can be used to efficiently build fact verification models in different languages.
no code implementations • 27 Oct 2022 • Li-Kuang Chen, Canasai Kruengkrai, Junichi Yamagishi
Methods that address spurious correlations, such as Just Train Twice (JTT, arXiv:2107.09044v2), involve reweighting a subset of the training set to maximize worst-group accuracy.
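As a rough illustration of the two-stage idea (not the paper's code): an identification model is first trained with plain ERM, its training-set errors are upweighted by a factor `lambda_up`, and a final model is retrained on the reweighted data. The `predict()` interface and the default `lambda_up` value below are assumptions.

```python
def jtt_reweight(train_set, identification_model, lambda_up=6.0):
    """Second stage of JTT-style reweighting (sketch).

    Examples misclassified by the first-stage (ERM) identification model are
    upweighted by `lambda_up`; all other examples keep weight 1.
    """
    weights = []
    for x, y in train_set:
        pred = identification_model.predict(x)  # assumed predict() interface
        weights.append(lambda_up if pred != y else 1.0)
    return weights

# Stage 1: train `identification_model` with standard ERM for a few epochs.
# Stage 2: retrain the final model on `train_set` using these per-example
#          weights (e.g. in a weighted loss), which tends to raise
#          worst-group accuracy.
```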
2 code implementations • Findings (ACL) 2021 • Canasai Kruengkrai, Junichi Yamagishi, Xin Wang
Evidence-based fact checking aims to verify the truthfulness of a claim against evidence extracted from textual sources.
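A common formulation of this task, not necessarily the exact model in the paper, encodes a claim and its retrieved evidence as a sentence pair and classifies the pair as supported, refuted, or not-enough-info. Below is a minimal sketch using the Hugging Face `transformers` API; the base model, the three-way label scheme, and the example texts are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed base model and 3-way label scheme (SUPPORTS / REFUTES / NOT ENOUGH INFO).
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

claim = "The Eiffel Tower is located in Berlin."
evidence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."

# Claim and evidence are encoded jointly as a sentence pair.
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted label index (head is untrained here)
```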
no code implementations • EMNLP 2020 • Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, Chunyan Miao
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
no code implementations • ACL 2020 • Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied, Lidong Bing
Exploiting sentence-level labels, which are easy to obtain, is a plausible way to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate.
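One common way to exploit such labels is a multi-task model in which a shared encoder feeds both a per-token tagging head and a sentence-level head (e.g. "does this sentence mention an entity?"). The sketch below is an illustrative architecture with assumed dimensions and label sets, not necessarily the paper's model.

```python
import torch
import torch.nn as nn

class JointNERModel(nn.Module):
    """Shared encoder with a token-level tagging head and a sentence-level head."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)  # per-token NER tags
        self.sent_head = nn.Linear(2 * hidden_dim, 2)         # sentence-level label

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))       # (batch, seq, 2*hidden)
        tag_logits = self.tag_head(states)                    # token-level predictions
        sent_logits = self.sent_head(states.mean(dim=1))       # pooled sentence prediction
        return tag_logits, sent_logits

# Joint loss (weighting is illustrative):
# loss = ce_token(tag_logits, tag_labels) + 0.5 * ce_sent(sent_logits, sent_labels)
```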
no code implementations • IJCNLP 2019 • Canasai Kruengkrai
Flipping sentiment while preserving sentence meaning is challenging because parallel sentences with the same content but different sentiment polarities are not always available for model learning.
no code implementations • ACL 2019 • Canasai Kruengkrai
We show that sampling latent variables multiple times at each gradient step improves a variational autoencoder, and we propose a simple and effective method that better exploits these latent variables through hidden state averaging.
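To make the idea concrete, the sketch below draws several latent samples per gradient step via the reparameterization trick and averages the decoder hidden states they induce; the dimensions, the latent-to-hidden mapping, and where the average is taken are assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

class MultiSampleLatentAveraging(nn.Module):
    """Sketch: draw K latent samples per gradient step and average the decoder
    hidden states they induce (one reading of "hidden state averaging")."""
    def __init__(self, latent_dim=32, hidden_dim=256, num_samples=5):
        super().__init__()
        self.num_samples = num_samples
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)  # assumed mapping

    def forward(self, mu, logvar):
        # Reparameterization trick, repeated K times.
        std = torch.exp(0.5 * logvar)
        hiddens = []
        for _ in range(self.num_samples):
            z = mu + std * torch.randn_like(std)               # one latent sample
            hiddens.append(torch.tanh(self.latent_to_hidden(z)))
        # Average the K hidden states before decoding.
        return torch.stack(hiddens, dim=0).mean(dim=0)
```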