no code implementations • CCL 2022 • Bin Liang, Zijie Lin, Bing Qin, Ruifeng Xu
“Existing research on textual sarcasm detection usually stops at sentence-level classification of sarcastic expressions, without considering the influence of the sarcasm target on the sarcastic expression. To address this problem, this paper proposes a new topic-oriented sarcasm detection task. By introducing topics and treating the topic as the sarcasm target, the task helps to better understand and model sarcastic expressions. Accordingly, this paper constructs a new topic-oriented sarcasm detection dataset, containing 707 topics and 4,871 corresponding topic-comment pairs. On this basis, a topic-oriented prompt learning model for sarcastic expressions is proposed, built on prompt learning and large-scale pre-trained language models. Experimental results on the constructed dataset show that the proposed model outperforms baseline models, and experimental analysis further shows that the proposed topic-oriented sarcasm detection task is more challenging than the traditional sentence-level task.”
1 code implementation • dialdoc (ACL) 2022 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently.
no code implementations • Findings (ACL) 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin
Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation.
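A rough sketch of the weighted-decoding idea (an illustration of the general technique, not this paper's exact controller): the LM's next-token distribution is reweighted by a controller's attribute likelihood, with the exponent `weight` as an assumed knob.

```python
import numpy as np

def weighted_decode_step(lm_probs, ctrl_probs, weight=2.0):
    """Combine the LM's next-token distribution with a controller's
    attribute-conditional distribution: p(x) ∝ p_LM(x) * p_ctrl(x)^weight."""
    scores = lm_probs * np.power(ctrl_probs, weight)
    return scores / scores.sum()

# toy 5-token vocabulary; the controller pushes mass toward token 1
lm_probs = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
ctrl_probs = np.array([0.1, 0.5, 0.2, 0.1, 0.1])
print(weighted_decode_step(lm_probs, ctrl_probs))
```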
no code implementations • Findings (EMNLP) 2021 • Xin Lu, Yijian Tian, Yanyan Zhao, Bing Qin
To address this problem, we propose a simple and effective Retrieve-Discriminate-Rewrite framework.
1 code implementation • COLING 2022 • Weixiang Zhao, Yanyan Zhao, Bing Qin
Specifically, two detachment strategies are devised to perform context- and speaker-specific modeling within detached threads, which are bridged through a mutual module.
no code implementations • COLING 2022 • Jianhua Yuan, Yanyan Zhao, Yanyue Lu, Bing Qin
Motivated by how humans tackle stance detection tasks, we propose to incorporate the stance reasoning process as task knowledge to assist in learning genuine features and reducing reliance on bias features.
1 code implementation • COLING 2022 • Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu
To fill the gap, we propose CogBERT, a framework that can induce fine-grained cognitive features from cognitive data and incorporate cognitive features into BERT by adaptively adjusting the weight of cognitive features for different NLP tasks.
1 code implementation • EMNLP 2021 • Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin
Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.
1 code implementation • EMNLP 2021 • Xinwei Geng, Xiaocheng Feng, Bing Qin
To keep the data distribution consistent with iterative decoding, an iterative training strategy is employed to further improve the rewriting capacity.
1 code implementation • Findings (EMNLP) 2021 • Haichao Zhu, Zekun Wang, Heng Zhang, Ming Liu, Sendong Zhao, Bing Qin
Then, we only fine-tune the lottery subnetwork, a small fraction of the whole parameters, on the annotated target domain data for adaptation.
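A minimal sketch of subnetwork-only fine-tuning, assuming a magnitude-based mask as a stand-in for the paper's lottery-ticket search; `keep_ratio` and the masking scheme are illustrative.

```python
import torch

def magnitude_mask(model, keep_ratio=0.1):
    """Mark the top-|w| fraction of weights as trainable (a magnitude-based
    proxy for a lottery subnetwork); everything else stays frozen."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def apply_grad_masks(model, masks):
    """Zero gradients of frozen weights so only the subnetwork updates.
    Call between loss.backward() and optimizer.step()."""
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
```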
1 code implementation • 23 Aug 2024 • Li Du, Zhouhao Sun, Xiao Ding, Yixuan Ma, Yang Zhao, Kaitao Qiu, Ting Liu, Bing Qin
Although achieving promising performance, recent analyses show that current generative large language models (LLMs) may still capture dataset biases and utilize them for generation, leading to poor generalizability and harmfulness of LLMs.
no code implementations • 21 Aug 2024 • Kai Xiong, Xiao Ding, Li Du, Jiahao Ying, Ting Liu, Bing Qin, Yixin Cao
This makes it a challenge to diagnose and remedy the deficiencies of LLMs through rich label-free user queries.
1 code implementation • 8 Aug 2024 • Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu, Weihong Zhong, Xiachong Feng, Weijiang Yu, Weihua Peng, Duyu Tang, Dandan Tu, Bing Qin
Despite the impressive performance on information-seeking tasks, large language models (LLMs) still struggle with hallucinations.
no code implementations • 6 Aug 2024 • Jinglong Gao, Chen Lu, Xiao Ding, Zhongyang Li, Ting Liu, Bing Qin
However, existing fine-tuning based ECE methods cannot address all three key challenges in ECE simultaneously: 1) Complex Causality Extraction, where multiple causal-effect pairs occur within a single sentence; 2) Subtask Interaction, which involves modeling the mutual dependence between the two subtasks of ECE, i.e., extracting events and identifying the causal relationship between extracted events; and 3) Knowledge Fusion, which requires effectively fusing the knowledge in two modalities, i.e., the expressive pretrained language models and the structured knowledge graphs.
no code implementations • 12 Jul 2024 • Jinglong Gao, Xiao Ding, Yiming Cui, Jianbai Zhao, Hepeng Wang, Ting Liu, Bing Qin
To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts.
1 code implementation • 3 Jul 2024 • Zike Yuan, Ming Liu, Hui Wang, Bing Qin
Evaluating the graph comprehension and reasoning abilities of Large Language Models (LLMs) is challenging and often incomplete.
1 code implementation • 30 Jun 2024 • Weihong Zhong, Xiaocheng Feng, Liang Zhao, Qiming Li, Lei Huang, Yuxuan Gu, Weitao Ma, Yuan Xu, Bing Qin
To mitigate this, we further propose a training-free method called Residual Visual Decoding, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information.
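A schematic of the distribution revision (the additive combination rule and `beta` below are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def residual_visual_decode(logits_with_context, logits_residual, beta=0.5):
    """Revise the LVLM's next-token distribution by mixing in logits computed
    from the residual visual input, restoring direct visual grounding."""
    return softmax(logits_with_context + beta * logits_residual)
```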
no code implementations • 28 Jun 2024 • Zheng Chu, Jingchang Chen, Qianglong Chen, Haotian Wang, Kun Zhu, Xiyuan Du, Weijiang Yu, Ming Liu, Bing Qin
For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory.
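The aggregation step can be pictured as summing path probabilities per final answer and keeping the best-supported one; the (answer, probability) interface below is an assumption, and the beam construction itself is omitted.

```python
from collections import defaultdict

def aggregate_reasoning_paths(paths):
    """paths: [(final_answer, path_probability), ...] from sampled/beamed
    reasoning chains; sum probability mass per answer and pick the argmax."""
    scores = defaultdict(float)
    for answer, prob in paths:
        scores[answer] += prob
    return max(scores, key=scores.get)

paths = [("42", 0.35), ("42", 0.20), ("41", 0.30)]
print(aggregate_reasoning_paths(paths))  # "42": 0.55 total beats "41": 0.30
```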
no code implementations • 26 Jun 2024 • Jiafeng Liang, Shixin Jiang, Zekun Wang, Haojie Pan, Zerui Chen, Zheng Chu, Ming Liu, Ruiji Fu, Zhongyuan Wang, Bing Qin
Our proposed benchmark consists of three sub-tasks to evaluate the comprehension ability of models: (1) Step Captioning: models have to generate captions for specific steps from videos.
no code implementations • 26 Jun 2024 • MuZhen Cai, Sendong Zhao, Haochun Wang, Yanrui Du, Zewen Qiang, Bing Qin, Ting Liu
Artificial Intelligence predicts drug properties by encoding drug molecules, aiding in the rapid screening of candidates.
no code implementations • 22 Jun 2024 • Weitao Ma, Xiaocheng Feng, Weihong Zhong, Lei Huang, Yangfan Ye, Xiachong Feng, Bing Qin
Large language model unlearning has garnered increasing attention due to its potential to address security and privacy concerns, leading to extensive research in the field.
no code implementations • 12 Jun 2024 • Hao Yang, Yanyan Zhao, Yang Wu, Shilong Wang, Tian Zheng, Hongbo Zhang, Zongyang Ma, Wanxiang Che, Bing Qin
Compared to traditional sentiment analysis, which only considers text, multimodal sentiment analysis needs to consider emotional signals from multimodal sources simultaneously and is therefore more consistent with the way humans process sentiment in real-world scenarios.
1 code implementation • 8 Jun 2024 • Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, Bing Qin
In proactive dialogue, the challenge lies not just in generating responses but in steering conversations toward predetermined goals, a task where Large Language Models (LLMs) typically struggle due to their reactive nature.
1 code implementation • 5 Jun 2024 • Hongling Xu, Qianlong Wang, Yice Zhang, Min Yang, Xi Zeng, Bing Qin, Ruifeng Xu
Large language models (LLMs) have achieved promising results in sentiment analysis through the in-context learning (ICL) paradigm.
no code implementations • 4 Jun 2024 • Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
With the passage of the Right to Be Forgotten (RTBF) regulations and the scaling up of language model training datasets, research on model unlearning in large language models (LLMs) has become more crucial.
1 code implementation • 3 Jun 2024 • Kun Zhu, Xiaocheng Feng, Xiyuan Du, Yuxuan Gu, Weijiang Yu, Haotian Wang, Qianglong Chen, Zheng Chu, Jingchang Chen, Bing Qin
Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data.
no code implementations • 30 May 2024 • Jingchang Chen, Hongxuan Tang, Zheng Chu, Qianglong Chen, Zekun Wang, Ming Liu, Bing Qin
To this end, we propose FunCoder, a code generation framework incorporating the divide-and-conquer strategy with functional consensus.
1 code implementation • 23 May 2024 • Yanrui Du, Sendong Zhao, Danyang Zhao, Ming Ma, Yuhan Chen, Liangyu Huo, Qing Yang, Dongliang Xu, Bing Qin
When encountering malicious instructions, the router will assign a higher weight to the safe LLM to ensure that responses are harmless.
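A toy sketch of the routing idea, assuming the router emits a malice probability that linearly interpolates between the two models' next-token logits (the interpolation rule is illustrative, not the paper's exact design):

```python
import numpy as np

def routed_logits(logits_main, logits_safe, p_malicious):
    """Give the safety-aligned model more weight as the router's estimated
    probability that the instruction is malicious increases."""
    w = float(np.clip(p_malicious, 0.0, 1.0))
    return (1.0 - w) * logits_main + w * logits_safe
```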
no code implementations • 22 May 2024 • Weixiang Zhao, Yulin Hu, Zhuojun Li, Yang Deng, Yanyan Zhao, Bing Qin, Tat-Seng Chua
Safety alignment of large language models (LLMs) has been gaining increasing attention.
1 code implementation • 19 Apr 2024 • Yichong Huang, Xiaocheng Feng, Baohang Li, Yang Xiang, Hui Wang, Bing Qin, Ting Liu
To address this challenge, DeePEn maps the probability distribution of each model from its own probability space to a universal relative space based on the relative representation theory, and performs aggregation.
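A simplified sketch of the relative-space mapping: every token is described by its similarity to a shared set of anchor embeddings, so distributions over heterogeneous vocabularies become comparable and can be averaged. DeePEn's normalization details and the inverse mapping back to tokens are omitted, and the names here are illustrative.

```python
import numpy as np

def to_relative(probs, token_embs, anchor_embs):
    """probs: (vocab,) distribution; token_embs: (vocab, d); anchor_embs: (A, d).
    Returns the distribution re-expressed over the shared anchor dimensions."""
    t = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    a = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    rel = t @ a.T                    # (vocab, A) cosine similarities to anchors
    return probs @ rel               # (A,) representation in the relative space

# two models with different vocabularies, fused in the shared space:
# fused = 0.5 * to_relative(p1, E1, A) + 0.5 * to_relative(p2, E2, A)
```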
1 code implementation • 25 Mar 2024 • Yirong Zeng, Xiao Ding, Yi Zhao, Xiangyu Li, Jie Zhang, Chao Yao, Ting Liu, Bing Qin
Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset of 16K samples on the 2022 Russia-Ukraine conflict, each containing real-world claims, optimized evidence, and a referenced explanation.
no code implementations • 14 Mar 2024 • Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao
Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.
1 code implementation • 13 Mar 2024 • Fangqi Zhu, Yongqi Zhang, Lei Chen, Bing Qin, Ruifeng Xu
Adverse drug-drug interactions (DDIs) can compromise the effectiveness of concurrent drug administration, posing a significant challenge in healthcare.
no code implementations • 4 Mar 2024 • Xin Lu, Yanyan Zhao, Bing Qin
In this work, we attempt to explain and reverse the decline in base capabilities caused by the architecture of FFN-Wider Transformers, seeking to provide some insights.
no code implementations • 4 Mar 2024 • Xin Lu, Yanyan Zhao, Bing Qin
However, studies have indicated that MoE Transformers underperform vanilla Transformers in many downstream tasks, significantly diminishing the practical value of MoE models.
no code implementations • 4 Mar 2024 • Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu
Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.
no code implementations • 18 Feb 2024 • Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Zhouhao Sun, Jun Shi, Ting Liu, Bing Qin
Through pretraining on a corpus with various sources, Large Language Models (LLMs) have gained impressive performance.
1 code implementation • 15 Feb 2024 • Weixiang Zhao, Zhuojun Li, Shilong Wang, Yang Wang, Yulin Hu, Yanyan Zhao, Chen Wei, Bing Qin
Emotional Intelligence (EI), consisting of emotion perception, emotion cognition, and emotion expression, plays a critical role in improving the user interaction experience for current large language model (LLM) based conversational general AI assistants.
no code implementations • 2 Feb 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu
In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.
no code implementations • 29 Jan 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu
Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.
1 code implementation • 21 Jan 2024 • Haoqiang Guo, Sendong Zhao, Haochun Wang, Yanrui Du, Bing Qin
The agent accentuates task-relevant features in the molecular representation by understanding the natural language description of the task, just as a tailor customizes clothes for clients.
no code implementations • 16 Jan 2024 • Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Existing methods devise the learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) block and the selection module to pick out the corresponding one for the testing input, aiming at handling the challenges of catastrophic forgetting and knowledge transfer in CL.
no code implementations • 10 Jan 2024 • Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin
To align the translation-specific understanding with the general one, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult words), which explicitly incorporates the general understanding of the content that incurs inconsistent understanding in order to guide the translation.
1 code implementation • 6 Jan 2024 • Yaojia LV, Haojie Pan, Ruiji Fu, Ming Liu, Zhongyuan Wang, Bing Qin
Cognitive dynamics are pivotal to advance human understanding of the world.
no code implementations • 28 Dec 2023 • Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu
In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.
no code implementations • 22 Dec 2023 • Zhangyin Feng, Runyi Hu, Liangxin Liu, Fan Zhang, Duyu Tang, Yong Dai, Xiaocheng Feng, Jiwei Li, Bing Qin, Shuming Shi
Compared with autoregressive baselines that need to run one thousand times, our model runs only 16 times to generate images of competitive quality with an order of magnitude lower inference latency.
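The few-pass generation can be pictured as MaskGIT-style iterative parallel decoding (an assumed analogy rather than the paper's exact sampler): all positions start masked, and each of the 16 passes commits the most confident predictions.

```python
import numpy as np

def iterative_parallel_decode(predict_fn, seq_len, steps=16, mask_id=0):
    """predict_fn(tokens) -> (seq_len, vocab) probabilities; at each pass,
    re-predict every position and keep a growing fraction of confident tokens."""
    tokens = np.full(seq_len, mask_id)
    for step in range(steps):
        probs = predict_fn(tokens)
        best, conf = probs.argmax(axis=1), probs.max(axis=1)
        n_keep = int(seq_len * (step + 1) / steps)
        keep = np.argsort(-conf)[:n_keep]   # most confident positions
        tokens[keep] = best[keep]
    return tokens
```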
1 code implementation • 8 Dec 2023 • Haojie Pan, Zepeng Zhai, Hao Yuan, Yaojia LV, Ruiji Fu, Ming Liu, Zhongyuan Wang, Bing Qin
Driven by curiosity, humans have continually sought to explore and understand the world around them, leading to the invention of various tools to satiate this inquisitiveness.
no code implementations • 7 Dec 2023 • Yanrui Du, Sendong Zhao, Ming Ma, Yuhan Chen, Bing Qin
The jailbreak idea of our method is Inherent Response Tendency Analysis, which identifies real-world instructions that can inherently induce LLMs to generate affirmative responses. The corresponding jailbreak strategy is Real-World Instructions-Driven Jailbreak, which strategically splices real-world instructions identified through this analysis around the malicious instruction.
1 code implementation • 29 Nov 2023 • Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, Bing Qin
Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world.
no code implementations • 10 Nov 2023 • Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
In this paper, we present a review of the trends in integrating knowledge with large language models, including a taxonomy of methods, benchmarks, and applications.
1 code implementation • 9 Nov 2023 • Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.
no code implementations • 8 Nov 2023 • Zheng Chu, Zekun Wang, Jiafeng Liang, Ming Liu, Bing Qin
To address this issue, we propose MTGER, a novel Multi-view Temporal Graph Enhanced Temporal Reasoning framework for temporal reasoning over time-involved documents.
1 code implementation • 25 Oct 2023 • Yang Wu, Shilong Wang, Hao Yang, Tian Zheng, Hongbo Zhang, Yanyan Zhao, Bing Qin
In this paper, we evaluate different abilities of GPT-4V including visual understanding, language understanding, visual puzzle solving, and understanding of other modalities such as depth, thermal, video, and audio.
no code implementations • 20 Oct 2023 • Yanrui Du, Sendong Zhao, Haochun Wang, Yuhan Chen, Rui Bai, Zewen Qiang, MuZhen Cai, Bing Qin
Through extensive experiments on five reasoning datasets from the ERASER benchmark, we demonstrate that our framework not only establishes a more reliable link between the generated rationale and model decision but also achieves competitive results in task performance and the quality of rationale.
1 code implementation • 8 Oct 2023 • Zhangyin Feng, Xiaocheng Feng, Dezhi Zhao, Maojin Yang, Bing Qin
Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks.
1 code implementation • 27 Sep 2023 • Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu
We hope this paper serves as an introduction for beginners and fosters future research.
1 code implementation • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin
Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu
Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.
1 code implementation • 8 Sep 2023 • Yanrui Du, Sendong Zhao, MuZhen Cai, Ming Ma, Danyang Zhao, Jiawei Cao, Bing Qin
We conduct several experiments to analyze the dual logic ability of LLMs by examining the consistency of the stance in responses to paired questions about the same fact.
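The probe can be sketched as asking an affirmative and a negated question about the same fact and checking that the stances flip; `ask_fn` and the prompt templates below are placeholders for an actual LLM call.

```python
def dual_logic_consistency(ask_fn, facts):
    """Fraction of facts for which the model's yes/no answers to the
    affirmative and negated questions are logically opposite."""
    consistent = 0
    for fact in facts:
        pos = ask_fn(f"Is it true that {fact}? Answer yes or no.")
        neg = ask_fn(f"Is it false that {fact}? Answer yes or no.")
        if pos.strip().lower() != neg.strip().lower():
            consistent += 1
    return consistent / len(facts)
```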
no code implementations • 8 Sep 2023 • Yanrui Du, Sendong Zhao, Yuhan Chen, Rui Bai, Jing Liu, Hua Wu, Haifeng Wang, Bing Qin
To address this issue, it is crucial to analyze and mitigate the influence of superficial clues on STM models.
no code implementations • 7 Aug 2023 • Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin
However, existing work has focused on training models on centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally, due to their sensitive nature.
no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu
In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.
no code implementations • 29 Jun 2023 • Tao He, Ming Liu, Yixin Cao, Zekun Wang, Zihao Zheng, Zheng Chu, Bing Qin
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
no code implementations • 28 Jun 2023 • Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng, Shuangzhi Wu, Bing Qin, Yunbo Cao, Shuming Shi
Traditional multitask learning methods can generally exploit common knowledge only task-wise or language-wise, losing either cross-language or cross-task knowledge.
1 code implementation • 27 May 2023 • Fangqi Zhu, Lin Zhang, Jun Gao, Bing Qin, Ruifeng Xu, Haiqin Yang
Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task.
no code implementations • 26 May 2023 • Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, Bing Qin
Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation.
1 code implementation • 25 May 2023 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Baohang Li, Bing Qin
Multilingual neural machine translation has witnessed remarkable progress in recent years.
no code implementations • 24 May 2023 • Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin
Despite achieving remarkable performance on various vision-language tasks, Transformer-based Vision-Language Models (VLMs) suffer from redundancy in inputs and parameters, significantly hampering their efficiency in real-world applications.
no code implementations • 23 May 2023 • Hao Yang, Can Gao, Hao Liu, Xinyan Xiao, Yanyan Zhao, Bing Qin
The experimental results show that our model achieves state-of-the-art performance on various downstream tasks, and ablation studies demonstrate that effective cross-layer learning improves the model's multimodal representation ability.
1 code implementation • 19 May 2023 • Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin
Through extensive experiments on various datasets, LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.
1 code implementation • 18 May 2023 • Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu
To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance.
1 code implementation • 12 May 2023 • Jinglong Gao, Xiao Ding, Bing Qin, Ting Liu
Causal reasoning ability is crucial for numerous NLP applications.
no code implementations • 8 May 2023 • Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong
Instruction tuning has been shown to be able to improve cross-task generalization of language models.
1 code implementation • 5 May 2023 • Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin
Specifically, we construct the state transition graph in a two-step manner, named transit-then-interact, to grasp these three types of turn-level transition information.
no code implementations • 2 May 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Generative agents that simulate human society show tremendous potential for further research and practical applications.
no code implementations • 19 Apr 2023 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, Bing Qin
This report presents a study on the emotional dialogue capability of ChatGPT, an advanced language model developed by OpenAI.
1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu
Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.
no code implementations • 12 Apr 2023 • Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin
As a novel approach to tuning pre-trained models, prompt tuning involves freezing the parameters in downstream tasks while inserting trainable embeddings into inputs in the first layer.
1 code implementation • 7 Apr 2023 • Kun Zhu, Xiaocheng Feng, Xiachong Feng, Yingsheng Wu, Bing Qin
Scientific literature review generation aims to extract and organize important information from an abundant collection of reference papers and produce corresponding reviews, yet generated reviews often lack a clear and logical hierarchy.
no code implementations • 20 Feb 2023 • Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin
Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.
no code implementations • 23 Jan 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation to the semantic-dependent EEG representation, thus reducing the discrepancy.
no code implementations • 20 Dec 2022 • Jianhua Yuan, Yanyan Zhao, Bing Qin
Stance detection models may tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between the targets and texts.
1 code implementation • 16 Dec 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.
1 code implementation • 16 Dec 2022 • Kai Xiong, Xiao Ding, Zhongyang Li, Li Du, Bing Qin, Yi Zheng, Baoxing Huai
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs.
1 code implementation • 6 Dec 2022 • Weixiang Zhao, Yanyan Zhao, Zhuojun Li, Bing Qin
Moreover, social-interaction CSK serves as an emotion-level bridge (E-bridge) and an action-level bridge (A-bridge) to connect candidate utterances with the target one, providing explicit causal clues for the Emotional Interaction and Actional Interaction modules to reason about the target emotion.
Ranked #4 on Causal Emotion Entailment on RECCON
no code implementations • 7 Nov 2022 • Ming Liu, Yaojia LV, Jingrun Zhang, Ruiji Fu, Bing Qin
One is that it supports querying any Chinese named entity and browsing the extracted hypernym-hyponym paths surrounding the query entity.
1 code implementation • 28 Oct 2022 • Haojie Pan, Zepeng Zhai, Yuzhou Zhang, Ruiji Fu, Ming Liu, Yangqiu Song, Zhongyuan Wang, Bing Qin
In this paper, we propose Kuaipedia, a large-scale multi-modal encyclopedia consisting of items, aspects, and short videos linked to them, which was extracted from billions of videos on Kuaishou (Kwai), a well-known short-video platform in China.
1 code implementation • 8 Oct 2022 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Bing Qin
As a critical step toward human-like chatbots, empathetic response generation has attracted increasing interest.
1 code implementation • 6 Oct 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
no code implementations • 20 Sep 2022 • Yang Wu, Pai Peng, Zhenyu Zhang, Yanyan Zhao, Bing Qin
At the low-level, we propose the progressive tri-modal attention, which can model the tri-modal feature interactions by adopting a two-pass strategy and can further leverage such interactions to significantly reduce the computation and memory complexity through reducing the input token length.
1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu
Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.
no code implementations • 6 Sep 2022 • Pengfei Deng, Jianhua Yuan, Yanyan Zhao, Bing Qin
Our key intuition is that the sentiment representation of a document is composed of the sentiment representations of all the aspects of that document.
no code implementations • 21 Aug 2022 • Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu
To relieve this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy-to-hard) sequence.
1 code implementation • 26 Jul 2022 • Zhenran Xu, Zifei Shan, Yuxin Li, Baotian Hu, Bing Qin
We then establish a strong baseline that scores an R@1 of 46.2% on Few-Shot and 76.6% on Zero-Shot on our dataset.
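For reference, R@1 is standard top-k recall over ranked candidate lists; a minimal implementation with toy data:

```python
def recall_at_k(ranked_candidates, gold, k=1):
    """Fraction of queries whose gold entity appears among the top-k candidates."""
    hits = sum(1 for cands, g in zip(ranked_candidates, gold) if g in cands[:k])
    return hits / len(gold)

# 2 of 3 queries rank the gold entity first -> R@1 = 0.667
print(recall_at_k([["A", "B"], ["C", "A"], ["D", "E"]], ["A", "A", "D"], k=1))
```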
no code implementations • 4 Jul 2022 • Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin
In this paper, we address sparse KGC from these two motivations simultaneously, handle their respective drawbacks, and propose a plug-and-play unified framework, VEM$^2$L, over sparse KGs.
no code implementations • 28 Jun 2022 • Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu, Bing Qin
In this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment Analysis (MACSA) dataset, which contains more than 21K text-image pairs.
1 code implementation • 25 May 2022 • Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin
In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
1 code implementation • 3 May 2022 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Bing Qin
In this paper, we propose a novel training strategy named LSSD (Language-Specific Self-Distillation), which can alleviate the convergence inconsistency and help MNMT models achieve the best performance on each language pair simultaneously.
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which makes the sentiment of the text change and hurts the performance of multimodal sentiment models directly.
no code implementations • 24 Feb 2022 • Zhangyin Feng, Duyu Tang, Cong Zhou, Junwei Liao, Shuangzhi Wu, Xiaocheng Feng, Bing Qin, Yunbo Cao, Shuming Shi
(2) how to predict a word via a cloze test without knowing the number of wordpieces in advance?
2 code implementations • 16 Dec 2021 • Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei
We propose a cross-modal attention distillation framework to train a dual-encoder model for vision-language understanding tasks, such as visual reasoning and visual question answering.
no code implementations • 20 Aug 2021 • Yibo Sun, Jizhou Huang, Chunyuan Yuan, Miao Fan, Haifeng Wang, Ming Liu, Bing Qin
We approach this task as a sequence tagging problem, where the goal is to produce <POI name, accessibility label> pairs from unstructured text.
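One plausible realization of the tagging formulation is BIO span extraction followed by pairing; the `B-POI`/`B-ACC` tag names below are hypothetical stand-ins for the paper's (unspecified) schema.

```python
def extract_spans(tokens, tags):
    """Collect (type, text) spans from BIO tags."""
    spans, cur, cur_type = [], [], None
    for tok, tag in zip(tokens + ["<eos>"], tags + ["O"]):
        if tag.startswith("B-"):
            if cur:
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur.append(tok)
        else:
            if cur:
                spans.append((cur_type, " ".join(cur)))
            cur, cur_type = [], None
    return spans

tokens = "the north gate of Happy Mall has a wheelchair ramp".split()
tags = ["O", "O", "O", "O", "B-POI", "I-POI", "O", "O", "B-ACC", "I-ACC"]
spans = extract_spans(tokens, tags)
pois = [s for t, s in spans if t == "POI"]
accs = [s for t, s in spans if t == "ACC"]
print(list(zip(pois, accs)))  # [('Happy Mall', 'wheelchair ramp')]
```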
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Ting Liu, Bing Qin
Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu
Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.
no code implementations • 7 Jul 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin
We hope that this first survey of dialogue summarization can provide the community with quick access to and a general picture of this task, and motivate future research.
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
1 code implementation • 30 Apr 2021 • Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
Recently, various neural encoder-decoder models pioneered by the Seq2Seq framework have been proposed to achieve the goal of generating more abstractive summaries by learning to map input text to output text.
no code implementations • 26 Apr 2021 • Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, Ting Liu
Multiparty Dialogue Machine Reading Comprehension (MRC) differs from traditional MRC as models must handle the complex dialogue discourse structure, previously unconsidered in traditional MRC.
Ranked #4 on Question Answering on Molweni
no code implementations • 17 Apr 2021 • Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu
To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
1 code implementation • 7 Dec 2020 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) to explicitly model the interaction between utterances in a meeting by modeling different discourse relations.
no code implementations • 2 Dec 2020 • Sendong Zhao, Bing Qin, Ting Liu, Fei Wang
This paper proposes a method BioGRER to improve the BioKG's quality, which comprehensively combines the knowledge graph embedding and logic rules that support and negate triplets in the BioKG.
no code implementations • COLING 2020 • Xin Lu, Yanyan Zhao, Yang Wu, Yijian Tian, Huipeng Chen, Bing Qin
We noticed that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction, but it is impossible to input gold labels at inference time.
Ranked #44 on Emotion Recognition in Conversation on IEMOCAP
no code implementations • SEMEVAL 2020 • Xiao Ding, Dingkui Hao, Yuewei Zhang, Kuo Liao, Zhongyang Li, Bing Qin, Ting Liu
In this task, we focus on detecting causation, especially counterfactuals, from texts.
1 code implementation • COLING 2020 • Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient-learning problem when training data is limited.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Heng Gong, Wei Bi, Xiaocheng Feng, Bing Qin, Xiaojiang Liu, Ting Liu
Neural table-to-text models, which select and order salient data, as well as verbalizing them fluently via surface realization, have achieved promising progress.
1 code implementation • CCL 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
1 code implementation • ACL 2020 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
Ranked #13 on Stock Market Prediction on Astock
1 code implementation • COLING 2020 • Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, Bing Qin
Research into the area of multiparty dialog has grown considerably over recent years.
Ranked #7 on Discourse Parsing on Molweni
1 code implementation • 24 Feb 2020 • Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
no code implementations • 8 Nov 2019 • Jiaqi Li, Ming Liu, Bing Qin, Zihao Zheng, Ting Liu
In this paper, we propose the scheme for annotating large-scale multi-party chat dialogues for discourse parsing and machine comprehension.
no code implementations • 8 Nov 2019 • Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu
The limited size of existing query-focused summarization datasets renders training data-driven summarization models challenging.
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
no code implementations • IJCNLP 2019 • Shuang Chen, Jinpeng Wang, Xiaocheng Feng, Feng Jiang, Bing Qin, Chin-Yew Lin
Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge.
no code implementations • 12 Sep 2019 • Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.
1 code implementation • IJCNLP 2019 • Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu
To address the aforementioned problems, not only do we model each table cell by considering other records in the same row, but we also enrich the table's representation by modeling each table cell in the context of other cells in the same column or with historical (time-dimension) data, respectively.
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task for the languages other than English.
no code implementations • 26 Jun 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Conditions are essential in the statements of biological literature.
2 code implementations • 19 Jun 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
no code implementations • ACL 2019 • Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu
We also present a way to construct training data for our question generation models by leveraging the existing reading comprehension dataset.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Ming Liu, Bing Qin, Ting Liu
This paper investigates an attention-based automatic paradigm called TransATT for attribute acquisition, by learning the representation of hierarchical classes and attributes in Chinese ontology.
no code implementations • 8 Mar 2019 • Tianwen Jiang, Sendong Zhao, Jing Liu, Jin-Ge Yao, Ming Liu, Bing Qin, Ting Liu, Chin-Yew Lin
Time-DS is composed of a time-series instance-popularity and two strategies.
no code implementations • 26 Dec 2018 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.
1 code implementation • EMNLP 2018 • Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, Ting Liu
In this paper, we propose a new rich resource enhanced AMR aligner which produces multiple alignments and a new transition system for AMR parsing along with its oracle parser.
Ranked #2 on AMR Parsing on LDC2014T12
no code implementations • EMNLP 2018 • Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu
Although end-to-end neural machine translation (NMT) has achieved remarkable progress in the recent years, the idea of adopting multi-pass decoding mechanism into conventional NMT is not well explored.
no code implementations • 12 Sep 2018 • Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, Bing Qin
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.
no code implementations • 12 Sep 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jingjing Xu, Xiaocheng Feng, Bing Qin
Results show that our knowledge-aware model outperforms the state-of-the-art approaches.
1 code implementation • ACL 2018 • Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu
Many natural language processing tasks can be modeled into structured prediction and solved as a search problem.
no code implementations • ACL 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jianshu ji, Guihong Cao, Xiaocheng Feng, Bing Qin, Ting Liu, Ming Zhou
We present a generative model to map natural language questions into SQL queries.
Ranked #4 on Code Generation on WikiSQL
1 code implementation • NAACL 2018 • Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, Noah A. Smith
Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD.
Ranked #2 on Dependency Parsing on Tweebank
no code implementations • COLING 2016 • Xiaocheng Feng, Duyu Tang, Bing Qin, Ting Liu
Knowledge base (KB) such as Freebase plays an important role for many natural language processing tasks.
no code implementations • 7 Nov 2016 • Luyang Li, Bing Qin, Wenjing Ren, Ting Liu
We use feedforward and feedback memory networks to learn representations of the credibility of statements about the same object.
no code implementations • SEMEVAL 2016 • Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, Gülşen Eryiğit
8 code implementations • EMNLP 2016 • Duyu Tang, Bing Qin, Ting Liu
Such importance degree and text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory.
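A bare-bones sketch of one such computational layer over an external memory of context-word vectors, using a simplified additive attention score (the paper's exact parameterization differs):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attention_hop(memory, query, w, b=0.0):
    """One hop: score each memory slot against the current (aspect) vector,
    then return the attention-weighted memory plus a linear carry-over."""
    scores = np.array([np.tanh(w @ np.concatenate([m, query]) + b) for m in memory])
    alpha = softmax(scores)          # importance degree of each context word
    return memory.T @ alpha + query

rng = np.random.default_rng(0)
memory = rng.normal(size=(7, 16))    # 7 context words, 16-dim embeddings
query = rng.normal(size=16)          # aspect representation
w = rng.normal(size=32)
for _ in range(3):                   # multiple computational layers (hops)
    query = attention_hop(memory, query, w)
```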