Search Results for author: Po-Nien Kung

Found 9 papers, 3 papers with code

GenEARL: A Training-Free Generative Framework for Multimodal Event Argument Role Labeling

no code implementations · 7 Apr 2024 · Hritik Bansal, Po-Nien Kung, P. Jeffrey Brantingham, Kai-Wei Chang, Nanyun Peng

In this paper, we propose GenEARL, a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions given image contexts, in order to perform the EARL task.

Language Modelling · Large Language Model +1

Improving Event Definition Following For Zero-Shot Event Detection

no code implementations · 5 Mar 2024 · Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng

We hypothesize that a diverse set of event types and definitions is the key for models to learn to follow event definitions, whereas existing event extraction datasets focus on annotating many high-quality examples for only a few event types.

Event Detection · Event Extraction

Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks

1 code implementation · 1 Nov 2023 · Po-Nien Kung, Fan Yin, Di wu, Kai-Wei Chang, Nanyun Peng

Instruction tuning (IT) achieves impressive zero-shot generalization by training large language models (LLMs) on a massive number of diverse tasks with instructions.

Informativeness · Out-of-Distribution Generalization +1

MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways

no code implementations · 4 Oct 2023 · Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang

We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths triggered on social media by COVID-19-related news articles, accompanied by comprehensive insights, including user/community susceptibility levels as well as events and popular opinions raised by the crowd while propagating the information.

Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning

no code implementations · 19 May 2023 · Po-Nien Kung, Nanyun Peng

Our experiments show that models trained on simplified task definitions or delusive examples can achieve performance comparable to models trained on the original instructions and examples.

Zero-Shot Learning
