no code implementations • SemEval (NAACL) 2022 • Thanet Markchom, HuiZhi Liang, Jiaoyan Chen
To tackle this task, this work fine-tunes several BERT-based models, each pre-trained on a different language.
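The fine-tuning pattern described above (a frozen pre-trained encoder with a small task-specific head trained on labelled data) can be sketched in plain numpy. The random "embeddings" below stand in for real BERT sentence representations, which would normally come from a language-specific checkpoint; the dimensions, learning rate, and synthetic labels are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen, pre-trained encoder outputs (real BERT uses 768-d vectors;
# here 32-d toy embeddings keep the sketch self-contained).
X = rng.normal(size=(200, 32))          # 200 "sentences"
true_w = rng.normal(size=32)
y = (X @ true_w > 0).astype(float)      # synthetic binary task labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b):
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Task-specific head: a single logistic layer trained on top of the encoder.
w = np.zeros(32)
b = 0.0
lr = 0.5

initial_loss = loss(w, b)
for _ in range(200):                    # plain gradient descent on the head
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y) / len(y))
    b -= lr * np.mean(p - y)
final_loss = loss(w, b)

print(final_loss < initial_loss)        # the head fits the task, loss drops
```

In the actual setting, swapping the stand-in features for embeddings from differently pre-trained BERT checkpoints is what lets the same head be compared across languages.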
no code implementations • 22 Jan 2024 • Feng Xiong, Thanet Markchom, Ziwei Zheng, Subin Jung, Varun Ojha, HuiZhi Liang
The task comprises three subtasks: binary classification in monolingual and multilingual settings (Subtask A), multi-class classification (Subtask B), and mixed text detection (Subtask C).
no code implementations • SEMEVAL 2021 • Thanet Markchom, HuiZhi Liang
It shows that pre-trained BERT token embeddings can serve as additional knowledge for understanding abstract meanings in question answering.
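One way to read "token embeddings as additional knowledge" is to concatenate a word's pre-trained embedding onto the model's existing input features. The sketch below illustrates only that feature-augmentation step with toy vectors; the vocabulary, dimensions, and `augment` helper are hypothetical, and real BERT token embeddings would be 768-dimensional and come from a checkpoint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: in practice these would be pre-trained BERT token embeddings
# looked up for the abstract word in question.
token_embedding = {w: rng.normal(size=8) for w in ["harmony", "conflict", "music"]}

def augment(features, abstract_word):
    """Concatenate the abstract word's embedding onto the base input features."""
    return np.concatenate([features, token_embedding[abstract_word]])

base_features = rng.normal(size=16)     # e.g. an encoded question-answer pair
augmented = augment(base_features, "harmony")
print(augmented.shape)                  # 16-d base + 8-d token embedding = (24,)
```

A downstream classifier then sees both the question-answer encoding and the abstract word's distributional meaning in a single feature vector.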
no code implementations • SEMEVAL 2021 • Emmanuel Osei-Brefo, Thanet Markchom, HuiZhi Liang
This work proposes two approaches for handling noise and errors in crowd-sourced labels.
no code implementations • SEMEVAL 2020 • Thanet Markchom, Bhuvana Dhruva, Chandresh Pravin, HuiZhi Liang
The SemEval Task 4 Commonsense Validation and Explanation challenge tests whether a system can differentiate natural language statements that make sense from those that do not.