1 code implementation • 29 Jul 2024 • Wenxuan Zhang, Hou Pong Chan, Yiran Zhao, Mahani Aljunied, Jianyu Wang, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu, Yew Ken Chia, Xin Li, Lidong Bing
Large Language Models (LLMs) have shown remarkable abilities across various tasks, yet their development has predominantly centered on high-resource languages like English and Chinese, leaving low-resource languages underserved.
no code implementations • 28 Jun 2024 • Zhe Hu, Hou Pong Chan, Jing Li, Yu Yin
Both automatic and human evaluations show that our framework generates more diverse and persuasive arguments.
1 code implementation • 18 Mar 2024 • Kung-Hsiang Huang, Hou Pong Chan, Yi R. Fung, Haoyi Qiu, Mingyang Zhou, Shafiq Joty, Shih-Fu Chang, Heng Ji
This survey serves as a comprehensive resource for researchers and practitioners in natural language processing, computer vision, and data analysis, offering insights and directions for future research on chart understanding with large foundation models.
no code implementations • 16 Feb 2024 • Chenkai Sun, Ke Yang, Revanth Gangi Reddy, Yi R. Fung, Hou Pong Chan, Kevin Small, ChengXiang Zhai, Heng Ji
The increasing demand for personalized interactions with large language models (LLMs) calls for methodologies capable of accurately and efficiently identifying user opinions and preferences.
no code implementations • 12 Feb 2024 • Kyungha Kim, Sangyun Lee, Kung-Hsiang Huang, Hou Pong Chan, Manling Li, Heng Ji
Fact-checking research has extensively explored verification but has paid less attention to generating natural-language explanations, which are crucial for user trust.
2 code implementations • 15 Dec 2023 • Kung-Hsiang Huang, Mingyang Zhou, Hou Pong Chan, Yi R. Fung, Zhenhailong Wang, Lingyu Zhang, Shih-Fu Chang, Heng Ji
This work establishes factual error correction for chart captions as a new task, presents a novel evaluation mechanism, and demonstrates an effective approach to ensuring the factuality of generated chart captions.
1 code implementation • 31 Oct 2023 • Zhe Hu, Hou Pong Chan, Yu Yin
Argument generation is a challenging task in natural language processing, which requires rigorous reasoning and proper content organization.
1 code implementation • 20 Oct 2023 • Chenkai Sun, Jinning Li, Yi R. Fung, Hou Pong Chan, Tarek Abdelzaher, ChengXiang Zhai, Heng Ji
Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury.
1 code implementation • 25 May 2023 • Chenkai Sun, Jinning Li, Hou Pong Chan, ChengXiang Zhai, Heng Ji
Our analysis shows that the best-performing models can predict responses consistent with the personas. As a byproduct, the task formulation also enables many interesting applications in the analysis of social network groups and their opinions, such as the discovery of extreme opinion groups.
1 code implementation • 24 May 2023 • Qi Zeng, Mankeerat Sidhu, Ansel Blume, Hou Pong Chan, Lu Wang, Heng Ji
To address this gap, we propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
1 code implementation • 23 May 2023 • Hou Pong Chan, Qi Zeng, Heng Ji
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact, which explicitly represents the facts in the documents and summaries with semantic frames extracted by semantic role labeling, and highlights the related semantic frames to predict inconsistency.
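The pipeline implied above is frame extraction followed by frame-level relevance scoring. Below is a minimal Python sketch of that idea; `extract_frames` and `embed` are hypothetical placeholders standing in for a semantic role labeler and a sentence encoder, and the cosine-similarity highlighting is an illustration rather than the FineGrainFact architecture.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Frame:
    predicate: str
    arguments: list  # e.g., ["the company", "its forecast"]

    def text(self) -> str:
        return " ".join([self.predicate] + self.arguments)


def extract_frames(sentence: str) -> list:
    """Placeholder for a semantic role labeler; fakes one frame per sentence."""
    tokens = sentence.split()
    if not tokens:
        return []
    return [Frame(predicate=tokens[1] if len(tokens) > 1 else tokens[0],
                  arguments=[tokens[0], " ".join(tokens[2:])])]


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a bag-of-characters vector (stands in for a sentence encoder)."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)


def highlight_related_frames(document: list, summary: list, top_k: int = 2):
    """For each summary frame, return the document frames most similar to it.
    A real detector would feed these frame pairs to a classifier that predicts
    whether the summary frame is consistent with the document."""
    doc_frames = [f for sent in document for f in extract_frames(sent)]
    if not doc_frames:
        return []
    doc_vecs = np.stack([embed(f.text()) for f in doc_frames])
    results = []
    for sent in summary:
        for frame in extract_frames(sent):
            sims = doc_vecs @ embed(frame.text())
            top = np.argsort(-sims)[:top_k]
            results.append((frame, [doc_frames[i] for i in top]))
    return results
```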
no code implementations • 23 May 2023 • Kung-Hsiang Huang, Hou Pong Chan, Kathleen McKeown, Heng Ji
We present a novel task, identifying manipulation of news on social media, which aims to detect whether a social media post has been manipulated and to identify the manipulated or inserted information.
1 code implementation • 13 May 2023 • Kung-Hsiang Huang, Hou Pong Chan, Heng Ji
Faithfully correcting factual errors is critical for maintaining the integrity of textual knowledge bases and preventing hallucinations in sequence-to-sequence models.
1 code implementation • 3 May 2023 • Chi Seng Cheang, Hou Pong Chan, Derek F. Wong, Xuebo Liu, Zhaocong Li, Yanming Sun, Shudong Liu, Lidia S. Chao
Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects the generalization performance of PLMs on future data.
1 code implementation • 10 Feb 2023 • Susik Yoon, Hou Pong Chan, Jiawei Han
Summarizing text-rich documents has long been studied in the literature, but most existing efforts summarize a static, predefined multi-document set.
1 code implementation • 2 Dec 2022 • Revanth Gangi Reddy, Heba Elfardy, Hou Pong Chan, Kevin Small, Heng Ji
A primary objective of news articles is to establish the factual record for an event, frequently achieved by conveying both the details of the specified event (i.e., the 5 Ws: Who, What, Where, When, and Why regarding the event) and how people reacted to it (i.e., reported statements).
1 code implementation • 26 Oct 2022 • Zhe Hu, Hou Pong Chan, Lifu Huang
Teaching neural models to generate coherent narrative texts is a critical problem.
1 code implementation • 25 Aug 2022 • Qingyun Wang, Manling Li, Hou Pong Chan, Lifu Huang, Julia Hockenmaier, Girish Chowdhary, Heng Ji
Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, an essential task for assisting robots or humans in performing stereotypical activities.
no code implementations • ACL 2022 • Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang
Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.
no code implementations • 14 Mar 2022 • Hou Pong Chan, Mingxi Guo, Cheng-Zhong Xu
In this work, we study the problem of language grounding for autonomous vehicles, which aims to localize a region in a visual scene according to a natural language command from a passenger.
no code implementations • 14 Sep 2021 • Zhe Hu, Zhiwei Cao, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Jinsong Su, Hua Wu
Controllable text generation is an appealing but challenging task, which allows users to specify particular attributes of the generated outputs.
1 code implementation • 7 Aug 2021 • Hou Pong Chan, Lu Wang, Irwin King
We study controllable text summarization, which allows users to control a particular attribute (e.g., the length limit) of the generated summaries.
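One generic way to expose such an attribute to a summarizer (not necessarily this paper's method) is to map the requested length to a bucket token and prepend it to the source text. The sketch below uses a Hugging Face seq2seq interface; the bucket names and thresholds are illustrative assumptions, and the control tokens only take effect after fine-tuning on data labeled with them.

```python
# Generic length-control sketch: the requested length limit is mapped to a
# bucket token prepended to the input, so the summarizer can condition on it.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

LENGTH_BUCKETS = ["<len_short>", "<len_medium>", "<len_long>"]  # assumed names


def length_bucket(max_words: int) -> str:
    if max_words <= 30:
        return "<len_short>"
    if max_words <= 80:
        return "<len_medium>"
    return "<len_long>"


tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer.add_special_tokens({"additional_special_tokens": LENGTH_BUCKETS})
model.resize_token_embeddings(len(tokenizer))

document = "..."  # source document to summarize
controlled_input = f"{length_bucket(60)} summarize: {document}"
inputs = tokenizer(controlled_input, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```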
1 code implementation • 3 Aug 2021 • Wang Chen, Piji Li, Hou Pong Chan, Irwin King
The supporting utterance flow modeling helps generate a coherent summary by smoothly shifting the focus from earlier utterances to later ones.
1 code implementation • 19 Jun 2021 • Hou Pong Chan, Irwin King
This framework first selects salient sentences and then independently condenses each of the selected sentences into a concise version.
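A minimal sketch of such a select-then-compress pipeline is shown below; the salience scorer and compressor are simple placeholders, whereas the actual framework trains neural modules for both stages.

```python
import re


def split_sentences(text: str) -> list:
    """Naive sentence splitter used for illustration only."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def salience_score(sentence: str, keywords: set) -> float:
    """Placeholder selector: keyword overlap. A trained selector would score
    sentences with a neural network instead."""
    words = set(sentence.lower().split())
    return len(words & keywords) / (len(words) + 1e-8)


def compress(sentence: str, max_words: int = 12) -> str:
    """Placeholder compressor: simple truncation. The real framework condenses
    each selected sentence with a learned abstractive model."""
    words = sentence.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")


def select_then_compress(document: str, keywords: set, k: int = 3) -> str:
    sentences = split_sentences(document)
    selected = sorted(sentences, key=lambda s: salience_score(s, keywords),
                      reverse=True)[:k]
    # Each selected sentence is condensed independently of the others.
    return " ".join(compress(s) for s in selected)
```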
1 code implementation • 2 Jun 2020 • Hou Pong Chan, Wang Chen, Irwin King
Review summarization aims to generate a concise summary that describes the key opinions and sentiment of a review, while sentiment classification aims to predict a sentiment label indicating the sentiment attitude of a review.
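The joint setup can be pictured as a shared encoder feeding both a summary decoder and a sentiment head. The PyTorch skeleton below is a generic multi-task sketch, not the paper's dual-view architecture.

```python
import torch
import torch.nn as nn


class JointSummarizerClassifier(nn.Module):
    """Shared encoder with two heads: one generates the review summary,
    the other predicts the review's sentiment label."""

    def __init__(self, vocab_size: int, hidden: int = 256, num_labels: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.gen_head = nn.Linear(hidden, vocab_size)   # summary tokens
        self.cls_head = nn.Linear(hidden, num_labels)   # sentiment label

    def forward(self, src_ids, tgt_ids):
        enc_out, enc_state = self.encoder(self.embed(src_ids))
        # Summarization view: decode conditioned on the encoder state.
        dec_out, _ = self.decoder(self.embed(tgt_ids), enc_state)
        token_logits = self.gen_head(dec_out)
        # Classification view: predict sentiment from the final encoder state.
        sent_logits = self.cls_head(enc_state[-1])
        return token_logits, sent_logits


# Joint loss: weighted sum of the generation and classification objectives.
model = JointSummarizerClassifier(vocab_size=10000)
src = torch.randint(0, 10000, (4, 50))
tgt = torch.randint(0, 10000, (4, 12))
labels = torch.randint(0, 3, (4,))
token_logits, sent_logits = model(src, tgt)
loss = nn.functional.cross_entropy(token_logits.view(-1, 10000), tgt.view(-1)) \
     + 0.5 * nn.functional.cross_entropy(sent_logits, labels)
```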
1 code implementation • ACL 2020 • Wang Chen, Hou Pong Chan, Piji Li, Irwin King
A new setting has recently been introduced into this problem, in which, given a document, the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produce.
2 code implementations • ACL 2019 • Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, Shuming Shi
Further analysis shows that our model learns meaningful topics, which explains its superiority in social media keyphrase generation.
1 code implementation • ACL 2019 • Hou Pong Chan, Wang Chen, Lu Wang, Irwin King
To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to generate both sufficient and accurate keyphrases.
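As we understand it, the adaptive reward switches between recall and F1 depending on whether enough keyphrases have been generated; the sketch below treats the exact switching condition and normalization as assumptions.

```python
def adaptive_reward(predicted: list, gold: list) -> float:
    """Reward sketch: recall while the prediction set is still insufficient,
    F1 once enough keyphrases have been generated. Duplicates are ignored."""
    pred, ref = set(predicted), set(gold)
    if not ref:
        return 0.0
    correct = len(pred & ref)
    recall = correct / len(ref)
    if len(pred) < len(ref):        # not enough keyphrases yet -> reward sufficiency
        return recall
    precision = correct / max(len(pred), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)   # F1 rewards accuracy


# Example: the agent generated 2 of 4 gold keyphrases -> rewarded with recall (0.5).
print(adaptive_reward(["neural networks", "keyphrase"],
                      ["neural networks", "keyphrase",
                       "reinforcement learning", "adaptive reward"]))
```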
1 code implementation • NAACL 2019 • Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, Irwin King
To further exploit the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.
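A toy illustration of merging and re-ranking candidates from the three sources is given below; the fixed source weights and reciprocal-rank scoring are placeholders, whereas the paper's merging module is learned.

```python
from collections import defaultdict

# Placeholder source weights; the actual module learns how to score and
# combine candidates rather than using fixed weights.
SOURCE_WEIGHTS = {"generated": 1.0, "extracted": 0.8, "retrieved": 0.6}


def normalize(phrase: str) -> str:
    return " ".join(phrase.lower().split())


def merge_and_rerank(candidates: dict, top_k: int = 5) -> list:
    """candidates maps source name -> ranked list of keyphrases.
    Each phrase is scored by its reciprocal rank in every source, weighted
    per source, then deduplicated and re-ranked globally."""
    scores = defaultdict(float)
    for source, phrases in candidates.items():
        weight = SOURCE_WEIGHTS.get(source, 0.5)
        for rank, phrase in enumerate(phrases, start=1):
            scores[normalize(phrase)] += weight / rank
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


print(merge_and_rerank({
    "generated": ["keyphrase generation", "sequence to sequence"],
    "extracted": ["Keyphrase Generation", "copy mechanism"],
    "retrieved": ["keyword extraction"],
}))
```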
no code implementations • EMNLP 2018 • Hou Pong Chan, Irwin King
This task has been formulated as a reinforcement learning problem, in which the reward of the agent is the sum of positive responses received by the recommended comments.
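A small sketch of that reward structure, plugged into a simple epsilon-greedy recommender: the reward is the sum of positive responses (e.g., upvotes) received by the recommended comments. The simulator and exploration strategy here are illustrative assumptions, not the paper's agent.

```python
import random

random.seed(0)


def episode_reward(positive_responses: list) -> int:
    """Reward = total number of positive responses received by the
    comments the agent recommended."""
    return sum(positive_responses)


def epsilon_greedy_recommend(value_estimates: dict, k: int, eps: float = 0.1):
    """Pick k comments: mostly the highest-valued ones, occasionally random."""
    comments = list(value_estimates)
    if random.random() < eps:
        return random.sample(comments, k)
    return sorted(comments, key=value_estimates.get, reverse=True)[:k]


# Toy loop: value estimates are updated from the rewards the agent observes.
values = {c: 0.0 for c in ["c1", "c2", "c3", "c4"]}
counts = {c: 0 for c in values}
true_upvote_rate = {"c1": 0.9, "c2": 0.4, "c3": 0.1, "c4": 0.7}  # simulator only

total_reward = 0
for _ in range(200):
    chosen = epsilon_greedy_recommend(values, k=2)
    responses = [1 if random.random() < true_upvote_rate[c] else 0 for c in chosen]
    total_reward += episode_reward(responses)
    for c, r in zip(chosen, responses):
        counts[c] += 1
        values[c] += (r - values[c]) / counts[c]   # incremental mean update

print(total_reward, sorted(values, key=values.get, reverse=True))
```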