no code implementations • EMNLP 2021 • Huimin Wang, Kam-Fai Wong
Most reinforcement learning methods for dialog policy learning train a centralized agent that selects a predefined joint action concatenating domain name, intent type, and slot name.
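As a sketch of what such a flat joint-action space looks like, the snippet below enumerates the Cartesian product of hypothetical domain, intent, and slot inventories (the names are illustrative, not the paper's actual ontology); a multi-agent formulation would instead factor this space across cooperating agents.

```python
from itertools import product

# Hypothetical inventories; real task-oriented systems have far larger ones.
domains = ["restaurant", "hotel", "taxi"]
intents = ["inform", "request"]
slots = ["price", "area", "name"]

# A centralized agent selects one index into this flat list of joint actions.
joint_actions = [f"{d}-{i}-{s}" for d, i, s in product(domains, intents, slots)]
print(len(joint_actions))  # 3 * 2 * 3 = 18 predefined joint actions
```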
no code implementations • 30 May 2025 • Bojia Zi, Weixuan Peng, Xianbiao Qi, Jianan Wang, Shihao Zhao, Rong Xiao, Kam-Fai Wong
Motivated by the observation that text conditioning is not well suited to this task, we simplify the pretrained video generation model by removing the textual input and cross-attention layers, resulting in a more lightweight and efficient model architecture in the first stage.
no code implementations • 26 May 2025 • Yiming Du, Bingbing Wang, Yang He, Bin Liang, Baojun Wang, Zhongyang Li, Lin Gui, Jeff Z. Pan, Ruifeng Xu, Kam-Fai Wong
Based on this new dataset, we propose a Memory-Active Policy (MAP) that improves multi-session dialogue efficiency through a two-stage approach.
no code implementations • 20 May 2025 • Hongru Wang, Deng Cai, Wanjun Zhong, Shijue Huang, Jeff Z. Pan, Zeming Liu, Kam-Fai Wong
Inference-time scaling has attracted much attention, as it significantly enhances the performance of Large Language Models (LLMs) on complex reasoning tasks by increasing the length of the Chain-of-Thought.
no code implementations • 19 May 2025 • Hongru Wang, WenYu Huang, YuFei Wang, Yuanhao Xi, Jianqiao Lu, Huan Zhang, Nan Hu, Zeming Liu, Jeff Z. Pan, Kam-Fai Wong
Existing benchmarks that assess Language Models (LMs) as Language Agents (LAs) for tool use primarily focus on stateless, single-turn interactions or partial evaluations, such as tool selection in a single turn, overlooking the inherent stateful nature of interactions in multi-turn applications.
1 code implementation • 1 May 2025 • Yiming Du, WenYu Huang, Danna Zheng, Zhaowei Wang, Sebastien Montella, Mirella Lapata, Kam-Fai Wong, Jeff Z. Pan
By reframing memory systems through the lens of atomic operations and representation types, this survey provides a structured and dynamic perspective on research, benchmark datasets, and tools related to memory in AI, clarifying the functional interplay in LLM-based agents while outlining promising directions for future research. The paper list, datasets, methods, and tools are available at https://github.com/Elvin-Yiming-Du/Survey_Memory_in_AI.
no code implementations • 21 Apr 2025 • Hongru Wang, Cheng Qian, Wanjun Zhong, Xiusi Chen, Jiahao Qiu, Shijue Huang, Bowen Jin, Mengdi Wang, Kam-Fai Wong, Heng Ji
Tool-integrated reasoning (TIR) augments large language models (LLMs) with the ability to invoke external tools, such as search engines and code interpreters, to solve tasks beyond the capabilities of language-only reasoning.
no code implementations • 31 Mar 2025 • Zhengyi Zhao, Shubo Zhang, Bin Liang, Binyang Li, Kam-Fai Wong
In Biomedical Natural Language Processing (BioNLP) tasks, such as Relation Extraction, Named Entity Recognition, and Text Classification, the scarcity of high-quality data remains a significant challenge.
1 code implementation • 31 Mar 2025 • Rui Wang, Hongru Wang, Boyang Xue, Jianhui Pang, Shudong Liu, Yi Chen, Jiahao Qiu, Derek Fai Wong, Heng Ji, Kam-Fai Wong
Recent advancements in Large Language Models (LLMs) have significantly enhanced their ability to perform complex reasoning tasks, transitioning from fast and intuitive thinking (System 1) to slow and deep reasoning (System 2).
no code implementations • 29 Mar 2025 • Zhengyi Zhao, Shubo Zhang, Zezhong Wang, Bin Liang, Binyang Li, Kam-Fai Wong
Long-context question-answering (LCQA) systems have greatly benefited from the powerful reasoning capabilities of large language models (LLMs), which can be categorized into slow and quick reasoning modes.
no code implementations • 29 Mar 2025 • Zhengyi Zhao, Shubo Zhang, Yiming Du, Bin Liang, Baojun Wang, Zhongyang Li, Binyang Li, Kam-Fai Wong
Existing large language models (LLMs) have shown remarkable progress in dialogue systems.
no code implementations • 12 Mar 2025 • Boyang Xue, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Hongling Xu, Fei Mi, Yasheng Wang, Lifeng Shang, Qun Liu, Kam-Fai Wong
Current self-training methods for Large Language Models (LLMs) consistently under-sample challenging queries, leading to inadequate learning on difficult problems that limits the models' capabilities.
1 code implementation • 20 Feb 2025 • Liang Chen, Li Shen, Yang Deng, Xiaoyan Zhao, Bin Liang, Kam-Fai Wong
The in-context learning (ICL) capability of large language models (LLMs) enables them to perform challenging tasks using provided demonstrations.
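As a minimal illustration of ICL with provided demonstrations (the paper's actual demonstration-selection strategy is not shown here), a few-shot prompt can be assembled as follows:

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate labeled demonstrations before the test query,
    so the model can infer the task format in context."""
    parts = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [("great movie", "positive"), ("dull plot", "negative")]
print(build_icl_prompt(demos, "a waste of time"))
```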
no code implementations • 10 Feb 2025 • Bojia Zi, Penghui Ruan, Marco Chen, Xianbiao Qi, Shaozhe Hao, Shihao Zhao, Youze Huang, Bin Liang, Rong Xiao, Kam-Fai Wong
On the other hand, end-to-end methods, which rely on edited video pairs for training, offer faster inference speeds but often produce poor editing results due to a lack of high-quality training video pairs.
no code implementations • 6 Feb 2025 • Miaomiao Li, Hao Chen, Yang Wang, Tingyuan Zhu, Weijia Zhang, Kaijie Zhu, Kam-Fai Wong, Jindong Wang
We hope this work can provide valuable insights into research on LLM data augmentation.
1 code implementation • 16 Dec 2024 • Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, Kam-Fai Wong
Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often struggle to accurately express the factual knowledge they possess, especially in cases where the LLMs' knowledge boundaries are ambiguous.
no code implementations • 24 Oct 2024 • Zezhong Wang, Xingshan Zeng, Weiwen Liu, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong
Data quality assessments demonstrate improvements in the naturalness and coherence of our synthesized dialogues.
1 code implementation • 21 Oct 2024 • Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
In this work, we propose SpARE, a training-free representation engineering method that uses pre-trained sparse auto-encoders (SAEs) to control the knowledge selection behaviour of LLMs.
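A rough numpy sketch of the general mechanism behind SAE-based representation control; the actual SpARE procedure for choosing and scaling features is described in the paper, and all shapes and weights below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512

# Stand-ins for a pretrained sparse auto-encoder's weights.
W_enc = rng.normal(0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_model, d_sae))

def edit_activation(h, feature_ids, scale=0.0):
    """Encode a residual-stream activation into sparse features,
    rescale the chosen features, and add only the decoded edit back."""
    s = np.maximum(W_enc @ h + b_enc, 0.0)   # sparse feature activations
    s_edit = s.copy()
    s_edit[feature_ids] *= scale             # e.g., suppress selected features
    return h + W_dec @ (s_edit - s)          # apply the edit, keep the rest intact

h = rng.normal(size=d_model)
h_new = edit_activation(h, feature_ids=[3, 17], scale=0.0)
```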
1 code implementation • 21 Oct 2024 • Yu Zhao, Xiaotang Du, Giwon Hong, Aryo Pradipta Gema, Alessio Devoto, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini
Through probing tasks, we find that LLMs can internally register the signal of knowledge conflict in the residual stream, which can be accurately detected by probing the intermediate model activations.
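A minimal probing setup in the spirit of the abstract, fitting a linear classifier on (here synthetic) intermediate activations; a real probe would be trained on activations captured from the model's residual stream:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d_model, n = 256, 1000

# Synthetic stand-ins for residual-stream activations, labeled 1 when the
# prompt contains a knowledge conflict and 0 otherwise.
X = rng.normal(size=(n, d_model))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5  # inject a separable direction so the probe has signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```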
1 code implementation • 16 Oct 2024 • Boyang Xue, Hongru Wang, Rui Wang, Sheng Wang, Zezhong Wang, Yiming Du, Bin Liang, Kam-Fai Wong
This paper addresses this gap by introducing a comprehensive investigation of multilingual confidence estimation (MlingConf) for LLMs, focusing on both language-agnostic (LA) and language-specific (LS) tasks to explore the performance and language-dominance effects of multilingual confidence estimation across tasks.
1 code implementation • 10 Oct 2024 • Hongru Wang, Rui Wang, Boyang Xue, Heming Xia, Jingtao Cao, Zeming Liu, Jeff Z. Pan, Kam-Fai Wong
In this paper, we introduce AppBench, the first benchmark to evaluate LLMs' ability to plan and execute multiple APIs from various sources in order to complete the user's task.
1 code implementation • 23 Sep 2024 • Jingtao Cao, Zheng Zhang, Hongru Wang, Kam-Fai Wong
Progress in Text-to-Image (T2I) models has significantly improved the generation of images from textual descriptions.
no code implementations • 23 Jun 2024 • Zezhong Wang, Xingshan Zeng, Weiwen Liu, YuFei Wang, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong
To address these questions, we propose a method, namely Chain-of-Probe (CoP), to probe changes in the model's 'mind' during reasoning.
no code implementations • 17 Jun 2024 • Minda Hu, Licheng Zong, Hongru Wang, Jingyan Zhou, Jingjing Li, Yichen Gao, Kam-Fai Wong, Yu Li, Irwin King
By combining the reasoning capabilities of LLMs with the effectiveness of tree search, SeRTS boosts the zero-shot performance of retrieving high-quality and informative results for RAG.
no code implementations • 14 Jun 2024 • Jingtao Cao, Zheng Zhang, Hongru Wang, Bin Liang, Hao Wang, Kam-Fai Wong
Utilizing the BLIP model for image captioning, PP-OCR and TrOCR for text recognition across multiple languages, and the Qwen LLM for nuanced language understanding, our system is capable of identifying harmful content in memes created in English, Chinese, Malay, and Tamil.
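Schematically, the described architecture composes captioning, OCR, and an LLM judgment; the sketch below stubs out every model call, since the system's actual prompts and interfaces are not given here:

```python
def caption_image(image_path: str) -> str:
    """Stub for a BLIP-style image-captioning call."""
    return "a cartoon figure with overlaid text"

def extract_text(image_path: str) -> str:
    """Stub for PP-OCR/TrOCR multilingual text recognition."""
    return "example overlaid meme text"

def classify_harmful(caption: str, ocr_text: str) -> bool:
    """Stub for the LLM judgment over the fused caption and OCR signals."""
    prompt = (f"Meme caption: {caption}\nMeme text: {ocr_text}\n"
              "Is this meme harmful? Answer yes or no.")
    # A real system would send `prompt` to the LLM and parse its answer.
    return False

def moderate(image_path: str) -> bool:
    return classify_harmful(caption_image(image_path), extract_text(image_path))

print(moderate("meme.png"))  # hypothetical input path
```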
1 code implementation • 25 Mar 2024 • Zhiming Mao, Haoli Bai, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu, Kam-Fai Wong
Prior studies show that pre-training techniques can boost the performance of visual document understanding (VDU), which typically requires models to perceive and reason over both document texts and layouts (e.g., locations of texts and table cells).
1 code implementation • 18 Mar 2024 • Bojia Zi, Shihao Zhao, Xianbiao Qi, Jianan Wang, Yukai Shi, Qianyu Chen, Bin Liang, Kam-Fai Wong, Lei Zhang
To this end, this paper proposes a novel text-guided video inpainting model that achieves better consistency, controllability and compatibility.
no code implementations • 5 Mar 2024 • Rui Wang, Fei Mi, Yi Chen, Boyang Xue, Hongru Wang, Qi Zhu, Kam-Fai Wong, Ruifeng Xu
2) Role Prompting assigns a central prompt to the general domain and a unique role prompt to each specific domain to minimize inter-domain confusion during training.
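A small sketch of this composition, with illustrative prompt strings rather than the paper's actual ones:

```python
CENTRAL_PROMPT = "You are a helpful multi-domain assistant."  # shared, general-domain prompt
ROLE_PROMPTS = {  # one unique role prompt per specific domain (illustrative)
    "medical": "You are a cautious medical-domain expert.",
    "finance": "You are a precise financial-domain expert.",
}

def build_prompt(domain: str, user_input: str) -> str:
    """Prepend the central prompt plus the domain's role prompt to the input."""
    role = ROLE_PROMPTS.get(domain, "")
    return "\n".join(p for p in (CENTRAL_PROMPT, role, user_input) if p)

print(build_prompt("medical", "What does this lab result mean?"))
```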
no code implementations • 26 Feb 2024 • Hongru Wang, Boyang Xue, Baohang Zhou, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Kam-Fai Wong
Conversational retrieval refers to an information retrieval system that operates in an iterative and interactive manner, requiring the retrieval of various external resources, such as persona, knowledge, and even response, to effectively engage with the user and successfully complete the dialogue.
no code implementations • 26 Feb 2024 • Yiming Du, Hongru Wang, Zhengyi Zhao, Bin Liang, Baojun Wang, Wanjun Zhong, Zezhong Wang, Kam-Fai Wong
This dataset is collected to investigate the use of personalized memories, focusing on social interactions and events in the QA task.
1 code implementation • 22 Feb 2024 • Ang Li, Jingqian Zhao, Bin Liang, Lin Gui, Hui Wang, Xi Zeng, Xingwei Liang, Kam-Fai Wong, Ruifeng Xu
Therefore, in this paper, we propose a Counterfactual Augmented Calibration Network (FACTUAL), in which a novel calibration network is devised to calibrate potential bias in the stance predictions of LLMs.
no code implementations • 22 Feb 2024 • Han Zhang, Lin Gui, Yu Lei, Yuanzhao Zhai, Yehong Zhang, Yulan He, Hui Wang, Yue Yu, Kam-Fai Wong, Bin Liang, Ruifeng Xu
Reinforcement Learning from Human Feedback (RLHF) is commonly utilized to improve the alignment of Large Language Models (LLMs) with human preferences.
1 code implementation • 22 Feb 2024 • Bin Liang, Ang Li, Jingqian Zhao, Lin Gui, Min Yang, Yue Yu, Kam-Fai Wong, Ruifeng Xu
Stance detection is a challenging task that aims to identify public opinion from social media platforms with respect to specific targets.
no code implementations • 21 Feb 2024 • Hongru Wang, Boyang Xue, Baohang Zhou, Tianhua Zhang, Cunxiang Wang, Huimin Wang, Guanhua Chen, Kam-Fai Wong
Previous research has typically concentrated on leveraging the internal knowledge of Large Language Models (LLMs) to answer known questions (i.e., internal reasoning such as generate-then-read).
no code implementations • 8 Feb 2024 • Lingzhi Wang, Xingshan Zeng, Jinsong Guo, Kam-Fai Wong, Georg Gottlob
In summary, this paper contributes a novel selective unlearning method (SeUL), introduces specialized evaluation metrics (S-EL and S-MA) for assessing sensitive information forgetting, and proposes automatic online and offline sensitive span annotation methods to support the overall unlearning framework and evaluation process.
1 code implementation • 1 Feb 2024 • Luyang Lin, Lingzhi Wang, Xiaoyan Zhao, Jing Li, Kam-Fai Wong
IndiVec begins by constructing a fine-grained media bias database, leveraging the robust instruction-following capabilities of large language models and vector database techniques.
1 code implementation • 30 Jan 2024 • Wai-Chung Kwan, Xingshan Zeng, Yuxin Jiang, YuFei Wang, Liangyou Li, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong
Large language models (LLMs) are increasingly relied upon for complex multi-turn conversations across diverse real-world applications.
no code implementations • 24 Jan 2024 • Hongru Wang, WenYu Huang, Yang Deng, Rui Wang, Zezhong Wang, YuFei Wang, Fei Mi, Jeff Z. Pan, Kam-Fai Wong
To better plan and incorporate the use of multiple sources in generating personalized responses, we first decompose the task into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
no code implementations • 28 Nov 2023 • Hongru Wang, Lingzhi Wang, Yiming Du, Liang Chen, Jingyan Zhou, YuFei Wang, Kam-Fai Wong
This survey delves into the historical trajectory of dialogue systems, elucidating their intricate relationship with advancements in language models by categorizing this evolution into four distinct stages, each marked by pivotal LM breakthroughs: 1) the early stage, characterized by statistical LMs, resulting in rule-based or machine-learning-driven dialogue systems; 2) the independent development of TOD and ODD based on neural language models (NLMs; e.g., LSTM and GRU), since NLMs lack intrinsic knowledge in their parameters; 3) the fusion of different types of dialogue systems with the advent of pre-trained language models (PLMs), starting from the fusion of the four sub-tasks within TOD, and then of TOD with ODD; and 4) the current LLM-based dialogue systems, wherein LLMs can be used to conduct TOD and ODD seamlessly.
no code implementations • 16 Nov 2023 • Liang Chen, Yatao Bian, Yang Deng, Deng Cai, Shuaiyi Li, Peilin Zhao, Kam-Fai Wong
Text watermarking has emerged as a pivotal technique for identifying machine-generated text.
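For orientation, one common detection scheme in this area is green-list watermarking (Kirchenbauer et al., 2023), sketched below; it illustrates the technique in general, not necessarily this paper's construction:

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: int, token: int, key: str = "secret") -> bool:
    """Seed the vocabulary split with the previous token and a secret key."""
    h = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return (int.from_bytes(h[:8], "big") / 2**64) < GREEN_FRACTION

def detect(tokens: list[int]) -> float:
    """z-score of the green-token count; large values suggest a watermark."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    mu = n * GREEN_FRACTION
    sigma = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mu) / sigma

# detect(token_ids) on watermarked text should yield a large positive z-score.
```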
1 code implementation • 30 Oct 2023 • Wai-Chung Kwan, Xingshan Zeng, YuFei Wang, Yusen Sun, Liangyou Li, Lifeng Shang, Qun Liu, Kam-Fai Wong
In this paper, we propose M4LE, a Multi-ability, Multi-range, Multi-task, Multi-domain benchmark for Long-context Evaluation.
no code implementations • 24 Oct 2023 • Zezhong Wang, Fangkai Yang, Lu Wang, Pu Zhao, Hongru Wang, Liang Chen, QIngwei Lin, Kam-Fai Wong
Currently, there are two main approaches to address jailbreak attacks: safety training and safeguards.
no code implementations • 13 Oct 2023 • Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Wai-Chung Kwan, Irwin King, Kam-Fai Wong
Open-domain dialogue systems usually require different sources of knowledge to generate more informative and evidential responses.
1 code implementation • 12 Oct 2023 • Boyang Xue, Weichao Wang, Hongru Wang, Fei Mi, Rui Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong
Inspired by previous work which identified that feed-forward networks (FFNs) within Transformers are responsible for factual knowledge expressions, we investigate two methods to efficiently improve the factual expression capability of FFNs by knowledge enhancement and alignment, respectively.
1 code implementation • 11 Oct 2023 • Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when prompted to generate world knowledge.
no code implementations • 28 Sep 2023 • Hongru Wang, Huimin Wang, Lingzhi Wang, Minda Hu, Rui Wang, Boyang Xue, Hongyuan Lu, Fei Mi, Kam-Fai Wong
Large language models (LLMs) have demonstrated exceptional performance in planning the use of various functional tools, such as calculators and retrievers, particularly in question-answering tasks.
no code implementations • 5 Sep 2023 • Huimin Wang, Wai-Chung Kwan, Kam-Fai Wong
Recent works usually address dialog policy learning (DPL) by training a reinforcement learning (RL) agent to determine the best dialog action.
no code implementations • 5 Sep 2023 • Bojia Zi, Xianbiao Qi, Lingzhi Wang, Jianan Wang, Kam-Fai Wong, Lei Zhang
In this paper, we present Delta-LoRA, a novel parameter-efficient approach to fine-tuning large language models (LLMs).
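A toy numpy sketch of the kind of update the name suggests, in which the frozen pretrained weight is nudged by the step-to-step change of the low-rank product; learning rates, scaling, and scheduling are simplifications, so consult the paper for the real algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lr = 32, 4, 1e-2

W = rng.normal(size=(d, d))            # frozen pretrained weight (no gradient stored)
A = rng.normal(0, 0.01, size=(d, r))   # trainable low-rank factors
B = np.zeros((r, d))

def delta_lora_step(grad_A, grad_B):
    """Update A and B by gradient descent, then propagate the resulting
    change of A @ B into W itself (the 'delta' update)."""
    global A, B, W
    old_product = A @ B
    A = A - lr * grad_A
    B = B - lr * grad_B
    W += (A @ B) - old_product         # refresh W without computing its gradient

# Fake gradients, just to exercise the update.
delta_lora_step(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
```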
1 code implementation • 1 Sep 2023 • Wai-Chung Kwan, Huimin Wang, Hongru Wang, Zezhong Wang, Xian Wu, Yefeng Zheng, Kam-Fai Wong
In addition, JoTR employs reinforcement learning with a reward-shaping mechanism to efficiently finetune the word-level dialogue policy, which allows the model to learn from its interactions, improving its performance over time.
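The abstract mentions a reward-shaping mechanism; one standard, policy-invariant form is potential-based shaping, sketched below as a generic illustration rather than JoTR's specific design:

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99, done=False):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s).
    This provably preserves the optimal policy (Ng et al., 1999)."""
    phi_next = 0.0 if done else potential(s_next)
    return r + gamma * phi_next - potential(s)

# Example potential: reward progress toward filling required dialogue slots.
potential = lambda state: float(len(state["filled_slots"]))
r = shaped_reward(0.0, {"filled_slots": {"area"}},
                  {"filled_slots": {"area", "price"}}, potential)
print(r)  # 0 + 0.99 * 2 - 1 = 0.98
```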
1 code implementation • 17 Jul 2023 • Huimin Wang, Wai-Chung Kwan, Kam-Fai Wong, Yefeng Zheng
Automatic diagnosis (AD), a critical application of AI in healthcare, employs machine learning techniques to assist doctors in gathering patient symptom information for precise disease diagnosis.
1 code implementation • 25 May 2023 • Zhiming Mao, Huimin Wang, Yiming Du, Kam-Fai Wong
Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching.
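As a generic illustration of perplexity-based candidate ranking (using an off-the-shelf GPT-2 from Hugging Face rather than the paper's history-conditioned encoder-decoder, with plain prefix concatenation as the conditioning):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(history: str, candidate: str) -> float:
    """Perplexity of the candidate text given the user history as a prefix;
    lower perplexity is read as a better user-item match."""
    prefix_len = tok(history, return_tensors="pt").input_ids.shape[1]
    full = tok(history + " " + candidate, return_tensors="pt").input_ids
    labels = full.clone()
    labels[:, :prefix_len] = -100           # score only the candidate tokens
    loss = model(full, labels=labels).loss  # mean NLL over unmasked tokens
    return float(torch.exp(loss))

history = "clicked: 'local team wins championship'; 'transfer rumors heat up'"
candidates = ["star striker signs new contract", "how to bake sourdough bread"]
print(min(candidates, key=lambda c: perplexity(history, c)))
```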
1 code implementation • 23 May 2023 • Rui Wang, Hongru Wang, Fei Mi, Yi Chen, Boyang Xue, Kam-Fai Wong, Ruifeng Xu
Numerous works have been proposed to align large language models (LLMs) with human intents to better fulfill instructions, ensuring they are trustworthy and helpful.
1 code implementation • 22 May 2023 • Liang Chen, Hongru Wang, Yang Deng, Wai-Chung Kwan, Zezhong Wang, Kam-Fai Wong
Generating persona-consistent dialogue responses is important for developing intelligent conversational agents.
2 code implementations • 19 May 2023 • Hongru Wang, Rui Wang, Fei Mi, Yang Deng, Zezhong Wang, Bin Liang, Ruifeng Xu, Kam-Fai Wong
Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue systems with strong language understanding and generation capabilities.
1 code implementation • 11 May 2023 • Lingzhi Wang, Tong Chen, Wei Yuan, Xingshan Zeng, Kam-Fai Wong, Hongzhi Yin
Recent legislation of the "right to be forgotten" has led to interest in machine unlearning, where learned models are endowed with the ability to forget information about specific training instances as if they had never existed in the training set.
no code implementations • 27 Feb 2023 • Lingzhi Wang, Mrinmaya Sachan, Xingshan Zeng, Kam-Fai Wong
Conversational tutoring systems (CTSs) aim to help students master educational material with natural language interaction in the form of a dialog.
1 code implementation • 11 Oct 2022 • Zhiming Mao, Jian Li, Hongru Wang, Xingshan Zeng, Kam-Fai Wong
Second, existing graph-based NR methods are promising but lack effective news-user feature interaction, rendering the graph-based recommendation suboptimal.
no code implementations • 23 Sep 2022 • Lingzhi Wang, Shafiq Joty, Wei Gao, Xingshan Zeng, Kam-Fai Wong
In addition to conducting experiments on a popular dataset (ReDial), we also include a multi-domain dataset (OpenDialKG) to show the effectiveness of our model.
no code implementations • 28 Feb 2022 • Wai-Chung Kwan, Hongru Wang, Huimin Wang, Kam-Fai Wong
In this paper, we survey recent advances and challenges in dialogue policy from the perspective of RL.
no code implementations • 2 Nov 2021 • Hongru Wang, Huimin Wang, Zezhong Wang, Kam-Fai Wong
Reinforcement Learning (RL) has demonstrated its potential for training a dialogue policy agent to maximize the accumulated rewards given by users.
no code implementations • 14 Oct 2021 • Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang
Furthermore, we propose to evaluate the CRS models in an end-to-end manner, which can reflect the overall performance of the entire system rather than the performance of individual modules, compared to the separate evaluations of the two modules used in previous work.
no code implementations • 13 Oct 2021 • Hongru Wang, Zhijing Jin, Jiarun Cao, Gabriel Pui Cheong Fung, Kam-Fai Wong
However, previous works rarely investigate the effects of varying the number of classes (i.e., N-way) and the number of labeled examples per class (i.e., K-shot) between training and testing.
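An N-way K-shot episode sampler makes that train-versus-test mismatch concrete; a hypothetical sketch in which N and K are free to differ between the two phases:

```python
import random

def sample_episode(pool, n_way, k_shot, q_query=5, seed=None):
    """Sample an N-way K-shot episode (N classes, K support and q_query
    query examples per class) from a {label: [examples]} pool."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(pool), n_way)
    support, query = [], []
    for c in classes:
        ex = rng.sample(pool[c], k_shot + q_query)
        support += [(x, c) for x in ex[:k_shot]]
        query += [(x, c) for x in ex[k_shot:]]
    return support, query

pool = {c: [f"{c}-utt-{i}" for i in range(20)] for c in "ABCDEFGHIJ"}
train_ep = sample_episode(pool, n_way=5, k_shot=1, seed=0)   # 5-way 1-shot training
test_ep = sample_episode(pool, n_way=10, k_shot=5, seed=1)   # 10-way 5-shot testing
```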
no code implementations • 11 Sep 2021 • Hongru Wang, Mingyu Cui, Zimo Zhou, Gabriel Pui Cheong Fung, Kam-Fai Wong
A multi-turn dialogue always follows a specific topic thread, and topic shift at the discourse level occurs naturally as the conversation progresses, necessitating the model's ability to capture different topics and generate topic-aware responses.
no code implementations • 11 Sep 2021 • Zezhong Wang, Hongru Wang, Kwan Wai Chung, Jia Zhu, Gabriel Pui Cheong Fung, Kam-Fai Wong
To tackle this problem, we propose an effective similarity-based method to select data from the source domains.
1 code implementation • Findings (EMNLP) 2021 • Lingzhi Wang, Xingshan Zeng, Huang Hu, Kam-Fai Wong, Daxin Jiang
In recent years, online discussions and opinion sharing on social media have been booming worldwide.
1 code implementation • Findings (EMNLP) 2021 • Zhiming Mao, Xingshan Zeng, Kam-Fai Wong
In this work, we propose a news recommendation framework consisting of collaborative news encoding (CNE) and structural user encoding (SUE) to enhance news and user representation learning.
no code implementations • 26 Aug 2021 • Hongru Wang, Zezhong Wang, Wai Chung Kwan, Kam-Fai Wong
Meta-learning is widely used for few-shot slot tagging under the few-shot learning setting.
1 code implementation • EMNLP 2020 • Lingzhi Wang, Jing Li, Xingshan Zeng, Haisong Zhang, Kam-Fai Wong
Quotations are crucial for successful explanations and persuasions in interpersonal communications.
1 code implementation • ACL 2021 • Lingzhi Wang, Xingshan Zeng, Kam-Fai Wong
To help individuals express themselves better, quotation recommendation is receiving growing attention.
no code implementations • 17 Nov 2020 • Hongru Wang, Min Li, Zimo Zhou, Gabriel Pui Cheong Fung, Kam-Fai Wong
In this paper, we publish the first Cantonese knowledge-driven Dialogue Dataset for REStaurant (KddRES) in Hong Kong, which grounds the information in multi-turn conversations to one specific restaurant.
no code implementations • ACL 2020 • Huimin Wang, Baolin Peng, Kam-Fai Wong
Training a task-oriented dialogue agent with reinforcement learning is prohibitively expensive since it requires a large volume of interactions with users.
no code implementations • ACL 2020 • Xingshan Zeng, Jing Li, Lu Wang, Zhiming Mao, Kam-Fai Wong
Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner.
no code implementations • SEMEVAL 2020 • Hongru Wang, Xiangru Tang, Sunny Lai, Kwong Sak Leung, Jia Zhu, Gabriel Pui Cheong Fung, Kam-Fai Wong
This paper describes our system submitted to task 4 of SemEval 2020: Commonsense Validation and Explanation (ComVE) which consists of three sub-tasks.
no code implementations • NAACL 2021 • Dingmin Wang, Chenghua Lin, Qi Liu, Kam-Fai Wong
We present a fast and scalable architecture called Explicit Modular Decomposition (EMD), in which we incorporate both classification-based and extraction-based methods and design four modules (for classification and sequence labelling) to jointly extract dialogue states.
no code implementations • IJCNLP 2019 • Xingshan Zeng, Jing Li, Lu Wang, Kam-Fai Wong
The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis.
no code implementations • IJCNLP 2019 • Ming Liao, Jing Li, Haisong Zhang, Lingzhi Wang, Xixin Wu, Kam-Fai Wong
Aspect words, indicating opinion targets, are essential in expressing and understanding human opinions.
no code implementations • ACL 2019 • Jing Ma, Wei Gao, Shafiq Joty, Kam-Fai Wong
Claim verification is generally a task of verifying the veracity of a given claim, which is critical to many downstream applications.
1 code implementation • ACL 2019 • Xingshan Zeng, Jing Li, Lu Wang, Kam-Fai Wong
We hypothesize that both the context of the ongoing conversations and the users' previous chatting history will affect their continued interests in future engagement.
no code implementations • CL 2018 • Jing Li, Yan Song, Zhongyu Wei, Kam-Fai Wong
To address this issue, we organize microblog messages as conversation trees based on their reposting and replying relations, and propose an unsupervised model that jointly learns word distributions to represent: (1) different roles of conversational discourse, and (2) various latent topics in reflecting content information.
no code implementations • 11 Sep 2018 • Jing Li, Yan Song, Zhongyu Wei, Kam-Fai Wong
To address this issue, we organize microblog messages as conversation trees based on their reposting and replying relations, and propose an unsupervised model that jointly learns word distributions to represent: 1) different roles of conversational discourse, 2) various latent topics in reflecting content information.
no code implementations • ACL 2018 • Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-Fai Wong, Xiangying Dai
In this paper, we make a move to build a dialogue system for automatic diagnosis.
1 code implementation • ACL 2018 • Jing Ma, Wei Gao, Kam-Fai Wong
Automatic rumor detection is technically very challenging.
no code implementations • NAACL 2018 • Xingshan Zeng, Jing Li, Lu Wang, Nicholas Beauchamp, Sarah Shugars, Kam-Fai Wong
We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics.
3 code implementations • ACL 2018 • Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Kam-Fai Wong, Shang-Yu Su
During dialogue policy learning, the world model is constantly updated with real user experience to approach real user behavior, and in turn, the dialogue agent is optimized using both real experience and simulated experience.
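A schematic of the Deep Dyna-Q style loop this describes, with every component stubbed out so the control flow runs; the real agent, world model, and user environment are the neural components from the paper:

```python
class Stub:
    """Minimal placeholders so the loop below runs; real components are a
    neural dialogue agent, a learned world model, and a user environment."""
    def collect_episode(self, agent): return []
    def rollout(self, agent): return []
    def fit(self, data): pass
    def update(self, data): pass

def deep_dyna_q(agent, world_model, env, epochs=10, k_planning=5):
    """Schematic Deep Dyna-Q loop: the world model is continually refit on
    real experience, and the agent learns from both real and simulated turns."""
    for _ in range(epochs):
        real = env.collect_episode(agent)      # direct RL with real users
        agent.update(real)
        world_model.fit(real)                  # keep the model close to real behavior
        for _ in range(k_planning):            # planning with simulated experience
            agent.update(world_model.rollout(agent))

deep_dyna_q(Stub(), Stub(), Stub())
```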
no code implementations • IJCNLP 2017 • Liang-Chih Yu, Lung-Hao Lee, Jin Wang, Kam-Fai Wong
This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP), which seeks to identify real-valued sentiment scores of Chinese single words and multi-word phrases in both the valence and arousal dimensions.
no code implementations • WS 2017 • Gabriel Fung, Maxime Debosschere, Dingmin Wang, Bo Li, Jia Zhu, Kam-Fai Wong
This paper provides an overview along with our findings of the Chinese Spelling Check shared task at NLPTEA 2017.
no code implementations • 31 Oct 2017 • Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, Kam-Fai Wong
This paper presents a new method, adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems.
no code implementations • ACL 2017 • Jing Ma, Wei Gao, Kam-Fai Wong
How does fake news go viral via social media?
no code implementations • EMNLP 2017 • Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, Kam-Fai Wong
Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks.
no code implementations • EACL 2017 • Baolin Peng, Michael Seltzer, Y.C. Ju, Geoffrey Zweig, Kam-Fai Wong
This is motivated by an actual system under development to assist in the order-taking process.
no code implementations • COLING 2016 • Shichao Dong, Gabriel Pui Cheong Fung, Binyang Li, Baolin Peng, Ming Liao, Jia Zhu, Kam-Fai Wong
We present a system called ACE for Automatic Colloquialism and Errors detection for written Chinese.
no code implementations • 3 Jun 2016 • Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Kam-Fai Wong
Experimental results indicate that the model outperforms previously proposed neural conversation architectures, and that using specificity in the objective function significantly improves performances for both generation and retrieval.
1 code implementation • 22 Aug 2015 • Baolin Peng, Zhengdong Lu, Hang Li, Kam-Fai Wong
For example, it improves the accuracy on Path Finding (10K) from 33.4% [6] to over 98%.
no code implementations • LREC 2014 • Lanjun Zhou, Binyang Li, Zhongyu Wei, Kam-Fai Wong
The lack of an open discourse corpus for Chinese imposes limitations on many natural language processing tasks.
no code implementations • LREC 2012 • Yulan He, Hassan Saif, Zhongyu Wei, Kam-Fai Wong
There has been increasing interest in recent years in analyzing tweet messages relevant to political events so as to understand public opinion towards certain political issues.