Search Results for author: Jian Gang Ngui

Found 2 papers, 1 paper with code

Do pretrained transformers infer telicity like humans?

no code implementations • CoNLL (EMNLP) 2021 • Yiyun Zhao, Jian Gang Ngui, Lucy Hall Hartley, Steven Bethard

Pretrained transformer-based language models achieve state-of-the-art performance in many NLP tasks, but it is an open question whether the knowledge acquired by the models during pretraining resembles the linguistic knowledge of humans.

Open-Ended Question Answering

BHASA: A Holistic Southeast Asian Linguistic and Cultural Evaluation Suite for Large Language Models

2 code implementations • 12 Sep 2023 • Wei Qi Leong, Jian Gang Ngui, Yosephine Susanto, Hamsawardhini Rengarajan, Kengatharaiyer Sarveswaran, William Chandra Tjhi

As GPT-4 is purportedly one of the best-performing multilingual LLMs at the moment, we use it as a yardstick to gauge the capabilities of LLMs in the context of Southeast Asian (SEA) languages.

Natural Language Understanding
