1 code implementation • 28 Oct 2024 • Yen-Shan Chen, Jing Jin, Peng-Ting Kuo, Chao-Wei Huang, Yun-Nung Chen
Recent studies have demonstrated that large language models (LLMs) exhibit significant biases in evaluation tasks, particularly a tendency to rate self-generated content preferentially.
1 code implementation • 2 Oct 2024 • Chao-Wei Huang, Yun-Nung Chen
Effective information retrieval (IR) from vast datasets relies on advanced techniques to extract relevant information in response to queries.
1 code implementation • 2 Oct 2024 • Chao-Wei Huang, Yun-Nung Chen
In this paper, we address this gap by proposing FactAlign, a novel alignment framework designed to enhance the factuality of LLMs' long-form responses while maintaining their helpfulness.
no code implementations • 3 Jul 2024 • Chao-Wei Huang, Hui Lu, Hongyu Gong, Hirofumi Inaguma, Ilia Kulikov, Ruslan Mavlyutov, Sravya Popuri
Large language models (LLMs), known for their exceptional reasoning capabilities, generalizability, and fluency across diverse domains, present a promising avenue for enhancing speech-related tasks.
1 code implementation • 3 Jun 2024 • Yu-Min Tseng, Yu-Chao Huang, Teng-Yun Hsiao, Wei-Lin Chen, Chao-Wei Huang, Yu Meng, Yun-Nung Chen
The concept of persona, originally adopted in the dialogue literature, has resurged as a promising framework for tailoring large language models (LLMs) to specific contexts (e.g., personalized search, LLM-as-a-judge).
1 code implementation • 3 Jun 2024 • Cheng-Hsun Hsueh, Paul Kuo-Ming Huang, Tzu-Han Lin, Che-Wei Liao, Hung-Chieh Fang, Chao-Wei Huang, Yun-Nung Chen
Knowledge editing is a rising technique for efficiently updating factual knowledge in large language models (LLMs) with minimal alteration of parameters.
no code implementations • 3 Jun 2024 • Tzu-Lin Kuo, Tzu-Wei Chiu, Tzung-Sheng Lin, Sheng-Yang Wu, Chao-Wei Huang, Yun-Nung Chen
By examining state-of-the-art GR techniques and their applications, this survey aims to provide a foundational understanding of GR and inspire further innovations in this transformative approach to information retrieval.
1 code implementation • 25 Mar 2024 • Chao-Wei Huang, Yun-Nung Chen
This paper introduces InstUPR, an unsupervised passage reranking method based on large language models (LLMs).
1 code implementation • 6 Mar 2024 • Chao-Wei Huang, Chen-An Li, Tsu-Yuan Hsu, Chen-Yu Hsu, Yun-Nung Chen
Dense retrieval methods have demonstrated promising performance in multilingual information retrieval, where queries and documents can be in different languages.
1 code implementation • 13 Sep 2023 • Chao-Wei Huang, Chen-Yu Hsu, Tsu-Yuan Hsu, Chen-An Li, Yun-Nung Chen
Conversational search provides a natural interface for information retrieval (IR).
1 code implementation • 17 Nov 2022 • Hung-Chieh Fang, Kuo-Han Hung, Chao-Wei Huang, Yun-Nung Chen
Open-domain conversational question answering can be viewed as two tasks: passage retrieval and conversational question answering, where the former relies on selecting candidate passages from a large corpus and the latter requires a better understanding of the question in context to predict the answers.
1 code implementation • SIGDIAL (ACL) 2022 • Chun-Mao Lai, Ming-Hao Hsu, Chao-Wei Huang, Yun-Nung Chen
Prior work has demonstrated that data augmentation is useful for improving dialogue state tracking.
2 code implementations • NAACL (ClinicalNLP) 2022 • Chao-Wei Huang, Shang-Chi Tsai, Yun-Nung Chen
Prior work has shown that pretrained language models underperform on this task under the regular fine-tuning scheme.
no code implementations • 16 May 2022 • Yen-Ting Lin, Hui-Chi Kuo, Ze-Song Xu, Ssu Chiu, Chieh-Chi Hung, Yi-Cheng Chen, Chao-Wei Huang, Yun-Nung Chen
This paper introduces Miutsu, National Taiwan University's Alexa Prize TaskBot, which is designed to assist users in completing tasks requiring multiple steps and decisions in two different domains -- home improvement and cooking.
no code implementations • 25 Apr 2022 • Chao-Wei Huang, Kai-Chou Yang, Zi-Yuan Chen, Hao-Chien Cheng, Po-Yu Wu, Yu-Yang Huang, Chung-Kai Hsieh, Geng-Zhi Wildsky Fann, Ting-Yin Cheng, Ethan Tu, Yun-Nung Chen
With thousands of news articles from hundreds of sources distributed and shared every day, news consumption and information acquisition have become increasingly difficult for readers.
1 code implementation • NAACL 2021 • Shang-Chi Tsai, Chao-Wei Huang, Yun-Nung Chen
To address this problem, we propose a two-stage framework to improve automatic ICD coding by capturing the label correlation.
1 code implementation • 22 Jan 2021 • Seokhwan Kim, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tur
This challenge track aims to expand the coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
no code implementations • 12 Nov 2020 • Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, Rajen Subba
Interactive evaluation of dialog.
1 code implementation • 2 Nov 2020 • Chao-Wei Huang, Yun-Nung Chen
It has been shown that encoding lattices, as opposed to 1-best results generated by an automatic speech recognizer (ASR), boosts the performance of spoken language understanding (SLU).
2 code implementations • ACL 2020 • Chao-Wei Huang, Yun-Nung Chen
Pre-trained language models have achieved substantial improvements on many NLP tasks.
1 code implementation • ACL 2020 • Shang-Yu Su, Chao-Wei Huang, Yun-Nung Chen
Prior work was the first attempt to utilize the duality between NLU and NLG to improve performance via a dual supervised learning framework.
1 code implementation • 24 Sep 2019 • Chao-Wei Huang, Yun-Nung Chen
Employing pre-trained language models (LMs) to extract contextualized word representations has achieved state-of-the-art performance on various NLP tasks.
2 code implementations • ACL 2019 • Shang-Yu Su, Chao-Wei Huang, Yun-Nung Chen
Natural language understanding (NLU) and natural language generation (NLG) are both critical research topics in the NLP field.
no code implementations • 21 Mar 2019 • Ting-Rui Chiang, Chao-Wei Huang, Shang-Yu Su, Yun-Nung Chen
With the increasing research interest in dialogue response generation, an emerging branch formulates this task as next-sentence selection: given the partial dialogue context, the goal is to determine the most probable next sentence.
no code implementations • 21 Mar 2019 • Chao-Wei Huang, Ting-Rui Chiang, Shang-Yu Su, Yun-Nung Chen
Response selection has become an emerging research topic due to the growing interest in dialogue modeling; the goal of the task is to select an appropriate response for continuing a dialogue.