1 code implementation • EMNLP 2021 • Po-Nien Kung, Sheng-Siang Yin, Yi-Cheng Chen, Tse-Hsuan Yang, Yun-Nung Chen
Multi-task auxiliary learning utilizes a set of relevant auxiliary tasks to improve the performance of a primary task.
1 code implementation • 19 Jun 2024 • Honghua Zhang, Po-Nien Kung, Masahiro Yoshida, Guy Van den Broeck, Nanyun Peng
Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation at inference time poses a persistent challenge.
no code implementations • 7 Apr 2024 • Hritik Bansal, Po-Nien Kung, P. Jeffrey Brantingham, Kai-Wei Chang, Nanyun Peng
In this paper, we propose GenEARL, a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions given image contexts to perform the EARL task.
no code implementations • 5 Mar 2024 • Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng
We hypothesize that a diverse set of event types and definitions is key for models to learn to follow event definitions, whereas existing event extraction datasets focus on annotating many high-quality examples for only a few event types.
1 code implementation • 1 Nov 2023 • Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, Nanyun Peng
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
no code implementations • 4 Oct 2023 • Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles, accompanied by comprehensive insights, including user/community susceptibility levels as well as events and popular opinions raised by the crowd while propagating the information.
no code implementations • 24 May 2023 • Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, Wei Wang
Information extraction tasks such as event extraction require an in-depth understanding of the output structure and sub-task dependencies.
no code implementations • 19 May 2023 • Po-Nien Kung, Nanyun Peng
Our experiments show that models trained on simplified task definitions or delusive examples can achieve performance comparable to models trained on the original instructions and examples.
no code implementations • 11 Oct 2021 • Po-Nien Kung, Chung-Cheng Chang, Tse-Hsuan Yang, Hsin-Kai Hsu, Yu-Jia Liou, Yun-Nung Chen
Task-oriented dialogue systems have been a promising research area in NLP.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Po-Nien Kung, Tse-Hsuan Yang, Yi-Cheng Chen, Sheng-Siang Yin, Yun-Nung Chen
Extracting rationales can help humans understand which information the model utilizes and how it makes predictions, toward better interpretability.