1 code implementation • COLING 2022 • Wei-Lin Chen, An-Zi Yen, Hen-Hsen Huang, Hsin-Hsi Chen
Explaining the reasoning of neural models has attracted attention in recent years.
no code implementations • 31 Jul 2024 • Oscar Sainz, Iker García-Ferrero, Alon Jacovi, Jon Ander Campos, Yanai Elazar, Eneko Agirre, Yoav Goldberg, Wei-Lin Chen, Jenny Chim, Leshem Choshen, Luca D'Amico-Wong, Melissa Dell, Run-Ze Fan, Shahriar Golchin, Yucheng Li, PengFei Liu, Bhavish Pahwa, Ameya Prabhu, Suryansh Sharma, Emily Silcock, Kateryna Solonko, David Stap, Mihai Surdeanu, Yu-Min Tseng, Vishaal Udandarao, Zengzhi Wang, Ruijie Xu, Jinglin Yang
The workshop hosted a shared task to collect evidence of data contamination in currently available datasets and models.
1 code implementation • 19 Jun 2024 • Zhepei Wei, Wei-Lin Chen, Yu Meng
Retrieval-augmented generation (RAG) has shown promising potential to enhance the accuracy and factuality of language models (LMs).
1 code implementation • 3 Jun 2024 • Yu-Min Tseng, Yu-Chao Huang, Teng-Yun Hsiao, Wei-Lin Chen, Chao-Wei Huang, Yu Meng, Yun-Nung Chen
The concept of persona, originally adopted in the dialogue literature, has resurged as a promising framework for tailoring large language models (LLMs) to specific contexts (e.g., personalized search, LLM-as-a-judge).
2 code implementations • 29 Mar 2024 • Po-Heng Chen, Sijia Cheng, Wei-Lin Chen, Yen-Ting Lin, Yun-Nung Chen
We present TMLU, a holistic evaluation suite tailored for assessing the advanced knowledge and reasoning capabilities of LLMs in the context of Taiwanese Mandarin.
1 code implementation • 23 Oct 2023 • Wei-Lin Chen, Cheng-Kuang Wu, Hsin-Hsi Chen, Chung-Chi Chen
In this paper, we address the hallucination problem commonly found in natural language generation tasks.
1 code implementation • 18 Jul 2023 • Cheng-Kuang Wu, Wei-Lin Chen, Hsin-Hsi Chen
We explore the extension of chain-of-thought (CoT) prompting to medical reasoning for the task of automatic diagnosis.
1 code implementation • 24 May 2023 • Wei-Lin Chen, Cheng-Kuang Wu, Yun-Nung Chen, Hsin-Hsi Chen
Finally, we perform ICL for the test input with the pseudo-input-label pairs as demonstrations.
1 code implementation • 12 May 2023 • Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen
Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference.