1 code implementation • COLING 2022 • Weixiang Zhao, Yanyan Zhao, Bing Qin
Specifically, two detachment strategies are devised to perform context- and speaker-specific modeling within detached threads, which are then bridged through a mutual module.
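To make the detach-then-bridge idea concrete, below is a minimal PyTorch sketch of two detached-thread encoders bridged by a cross-attention "mutual" module; the class names, GRU encoders, and attention-based bridge are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DetachedThreadEncoder(nn.Module):
    """Encodes one detached thread (e.g. context- or speaker-specific)."""
    def __init__(self, dim):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, utterances):               # (batch, seq, dim)
        out, _ = self.gru(utterances)
        return out

class MutualBridge(nn.Module):
    """Bridges the two threads with cross-attention in both directions."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, ctx, spk):
        ctx2spk, _ = self.attn(ctx, spk, spk)    # context attends to speaker thread
        spk2ctx, _ = self.attn(spk, ctx, ctx)    # speaker attends to context thread
        return torch.cat([ctx2spk, spk2ctx], dim=-1)

x = torch.randn(2, 10, 128)                      # 10 utterance vectors per dialogue
ctx = DetachedThreadEncoder(128)(x)
spk = DetachedThreadEncoder(128)(x)
fused = MutualBridge(128)(ctx, spk)              # (2, 10, 256)
```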
no code implementations • Findings (EMNLP) 2021 • Xin Lu, Yijian Tian, Yanyan Zhao, Bing Qin
To address this problem, we propose a simple and effective Retrieve-Discriminate-Rewrite framework.
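As a rough illustration of how a Retrieve-Discriminate-Rewrite pipeline could be staged, here is a toy plain-Python sketch; the overlap-based scorer and all function names are hypothetical placeholders standing in for the learned components.

```python
from typing import Callable, List

def retrieve(query: str, corpus: List[str], top_k: int = 3) -> List[str]:
    # toy retriever: rank corpus entries by word overlap with the query
    overlap = lambda c: len(set(query.split()) & set(c.split()))
    return sorted(corpus, key=overlap, reverse=True)[:top_k]

def discriminate(query: str, candidates: List[str],
                 scorer: Callable[[str, str], float]) -> str:
    # pick the candidate a (learned) scorer judges most suitable
    return max(candidates, key=lambda c: scorer(query, c))

def rewrite(query: str, selected: str) -> str:
    # placeholder for a seq2seq rewriter adapting the candidate to the query
    return f"{selected} (adapted to: {query})"

corpus = ["stay calm and breathe", "check the error log", "restart the service"]
query = "the service keeps crashing"
best = discriminate(query, retrieve(query, corpus),
                    scorer=lambda q, c: len(set(q.split()) & set(c.split())))
print(rewrite(query, best))
```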
no code implementations • COLING 2022 • Jianhua Yuan, Yanyan Zhao, Yanyue Lu, Bing Qin
Motivated by how humans tackle stance detection tasks, we propose to incorporate the stance reasoning process as task knowledge to assist in learning genuine features and reducing reliance on biased features.
no code implementations • 12 Jun 2024 • Hao Yang, Yanyan Zhao, Yang Wu, Shilong Wang, Tian Zheng, Hongbo Zhang, Zongyang Ma, Wanxiang Che, Bing Qin
Compared to traditional sentiment analysis, which only considers text, multimodal sentiment analysis must consider emotional signals from multiple modalities simultaneously and is therefore more consistent with the way humans process sentiment in real-world scenarios.
no code implementations • 4 Jun 2024 • Bichen Wang, Yuzhe Zi, Yixin Sun, Yanyan Zhao, Bing Qin
With the passage of Right to Be Forgotten (RTBF) regulations and the growing scale of language model training datasets, research on model unlearning in large language models (LLMs) has become increasingly important.
no code implementations • 22 May 2024 • Weixiang Zhao, Yulin Hu, Zhuojun Li, Yang Deng, Yanyan Zhao, Bing Qin, Tat-Seng Chua
Safety alignment of large language models (LLMs) has been gaining increasing attention.
no code implementations • 4 Mar 2024 • Xin Lu, Yanyan Zhao, Bing Qin
However, studies have indicated that MoE Transformers underperform vanilla Transformers in many downstream tasks, significantly diminishing the practical value of MoE models.
no code implementations • 4 Mar 2024 • Xin Lu, Yanyan Zhao, Bing Qin
In this work, we attempt to explain and reverse the decline in base capabilities caused by the architecture of FFN-Wider Transformers, seeking to provide some insights.
1 code implementation • 15 Feb 2024 • Weixiang Zhao, Zhuojun Li, Shilong Wang, Yang Wang, Yulin Hu, Yanyan Zhao, Chen Wei, Bing Qin
Emotional Intelligence (EI), consisting of emotion perception, emotion cognition and emotion expression, plays a critical role in improving the user interaction experience for current large language model (LLM) based conversational AI assistants.
no code implementations • 16 Jan 2024 • Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Existing methods devise a learning module to acquire task-specific knowledge with parameter-efficient tuning (PET) blocks and a selection module to pick out the corresponding block for each testing input, aiming to handle the challenges of catastrophic forgetting and knowledge transfer in CL.
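As a rough illustration of this learn-then-select pattern, the sketch below pairs one adapter-style PET block per task with a simple linear selector that routes each input; the bottleneck adapter and the selector design are assumptions for illustration, not the paper's modules.

```python
import torch
import torch.nn as nn

class PETBlock(nn.Module):
    """A tiny bottleneck adapter standing in for a task-specific PET block."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))   # residual adapter

class LearnThenSelect(nn.Module):
    """One PET block per task plus a selector that routes each input."""
    def __init__(self, dim, num_tasks):
        super().__init__()
        self.blocks = nn.ModuleList(PETBlock(dim) for _ in range(num_tasks))
        self.selector = nn.Linear(dim, num_tasks)

    def forward(self, h):                              # (batch, seq, dim)
        task_ids = self.selector(h.mean(dim=1)).argmax(dim=-1)
        return torch.stack([self.blocks[int(t)](h[i])
                            for i, t in enumerate(task_ids)])

out = LearnThenSelect(128, num_tasks=3)(torch.randn(4, 10, 128))  # (4, 10, 128)
```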
1 code implementation • 25 Oct 2023 • Yang Wu, Shilong Wang, Hao Yang, Tian Zheng, Hongbo Zhang, Yanyan Zhao, Bing Qin
In this paper, we evaluate different abilities of GPT-4V including visual understanding, language understanding, visual puzzle solving, and understanding of other modalities such as depth, thermal, video, and audio.
no code implementations • 23 May 2023 • Hao Yang, Can Gao, Hao Liu, Xinyan Xiao, Yanyan Zhao, Bing Qin
The experimental results show that our model achieves state-of-the-art performance on various downstream tasks, and ablation studies demonstrate that effective cross-layer learning improves the model's multimodal representation ability.
no code implementations • 8 May 2023 • Yang Wu, Yanyan Zhao, Zhongyang Li, Bing Qin, Kai Xiong
Instruction tuning has been shown to be able to improve cross-task generalization of language models.
1 code implementation • 5 May 2023 • Weixiang Zhao, Yanyan Zhao, Shilong Wang, Bing Qin
Specifically, we construct the state transition graph in a two-step manner, named transit-then-interact, to grasp these three types of turn-level transition information.
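A minimal sketch of what such a two-step, transit-then-interact graph construction might look like over per-turn state nodes; the three-states-per-turn layout and the edge rules here are illustrative assumptions.

```python
import torch

def build_transition_graph(num_turns, num_states=3):
    """Step 1 ('transit'): link each state to the same state in the next turn.
    Step 2 ('interact'): link the different states within the same turn."""
    n = num_turns * num_states                   # one node per (turn, state)
    adj = torch.zeros(n, n)
    idx = lambda t, s: t * num_states + s
    for t in range(num_turns):
        for s in range(num_states):
            if t + 1 < num_turns:                # step 1: transit edges
                adj[idx(t, s), idx(t + 1, s)] = 1
            for s2 in range(num_states):         # step 2: interact edges
                if s2 != s:
                    adj[idx(t, s), idx(t, s2)] = 1
    return adj

adj = build_transition_graph(num_turns=4)        # 12 x 12 adjacency matrix
```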
no code implementations • 19 Apr 2023 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, Bing Qin
This report presents a study on the emotional dialogue capability of ChatGPT, an advanced language model developed by OpenAI.
no code implementations • 20 Dec 2022 • Jianhua Yuan, Yanyan Zhao, Bing Qin
Stance detection models tend to rely on dataset bias in the text part as a shortcut and thus fail to sufficiently learn the interaction between targets and texts.
1 code implementation • 6 Dec 2022 • Weixiang Zhao, Yanyan Zhao, Zhuojun Li, Bing Qin
Moreover, social-interaction CSK serves as an emotion-level bridge (E-bridge) and an action-level bridge (A-bridge) to connect candidate utterances with the target one, providing explicit causal clues for the Emotional Interaction and Actional Interaction modules to reason about the target emotion (a rough sketch follows this entry).
Ranked #4 on Causal Emotion Entailment on RECCON
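Here is a minimal sketch of how such commonsense bridges might weight candidate utterances against the target; the single fusion layer and softmax weighting are illustrative assumptions, not the paper's E-bridge/A-bridge design.

```python
import torch
import torch.nn as nn

class CSKBridge(nn.Module):
    """Scores candidates against the target via per-candidate commonsense clues."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(dim * 3, 1)        # candidate + bridge + target

    def forward(self, candidates, csk, target):
        # candidates, csk: (num_cand, dim); target: (dim,)
        tgt = target.unsqueeze(0).expand_as(candidates)
        scores = self.fuse(torch.cat([candidates, csk, tgt], dim=-1))
        return torch.softmax(scores.squeeze(-1), dim=0)   # causal weights

weights = CSKBridge(128)(torch.randn(5, 128),    # 5 candidate utterances
                         torch.randn(5, 128),    # their commonsense clues
                         torch.randn(128))       # target utterance
```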
1 code implementation • 8 Oct 2022 • Weixiang Zhao, Yanyan Zhao, Xin Lu, Bing Qin
As a critical step toward achieving human-like chatbots, empathetic response generation has attracted increasing interest.
no code implementations • 20 Sep 2022 • Yang Wu, Pai Peng, Zhenyu Zhang, Yanyan Zhao, Bing Qin
At the low level, we propose the progressive tri-modal attention, which models tri-modal feature interactions by adopting a two-pass strategy and further leverages these interactions to significantly reduce computation and memory complexity by shortening the input token length.
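One way to realize a token-reducing two-pass fusion is with a small set of learned query vectors, as in the sketch below; the learned-query design and the pass ordering are assumptions for illustration, not the paper's exact attention.

```python
import torch
import torch.nn as nn

class ProgressiveTriModalAttention(nn.Module):
    """Two-pass tri-modal fusion that shrinks the token length (a sketch)."""
    def __init__(self, dim, out_tokens=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.queries = nn.Parameter(torch.randn(1, out_tokens, dim))

    def forward(self, text, audio, video):
        q = self.queries.expand(text.size(0), -1, -1)
        ta = torch.cat([text, audio], dim=1)
        h, _ = self.attn(q, ta, ta)              # pass 1: summarize text + audio
        h, _ = self.attn(h, video, video)        # pass 2: interact with video
        return h                                 # (batch, out_tokens, dim)

t, a, v = (torch.randn(2, n, 64) for n in (50, 200, 120))
fused = ProgressiveTriModalAttention(64)(t, a, v)   # (2, 8, 64)
```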
no code implementations • 6 Sep 2022 • Pengfei Deng, Jianhua Yuan, Yanyan Zhao, Bing Qin
Our key intuition is that the sentiment representation of a document is composed of the sentiment representations of all the aspects of that document.
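That intuition maps naturally onto an attention-weighted composition, sketched below; the scoring layer is an illustrative assumption for how aspect representations could be aggregated into a document representation.

```python
import torch
import torch.nn as nn

class AspectComposition(nn.Module):
    """Composes a document representation from its aspect representations."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, aspects):                  # (num_aspects, dim)
        w = torch.softmax(self.score(aspects), dim=0)   # per-aspect weights
        return (w * aspects).sum(dim=0)          # document-level representation

doc_repr = AspectComposition(64)(torch.randn(3, 64))   # 3 aspects -> (64,)
```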
no code implementations • 28 Jun 2022 • Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu, Bing Qin
In this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment Analysis (MACSA) dataset, which contains more than 21K text-image pairs.
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are misrecognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models.
Automatic Speech Recognition (ASR) +4
1 code implementation • 7 Jun 2021 • Gaode Chen, Xinghua Zhang, Yanyan Zhao, Cong Xue, Ji Xiang
Meanwhile, a novel graph is proposed to enhance the interactivity between items in the user's behavior sequence, capturing both global and local item features.
no code implementations • 17 Apr 2021 • Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu
To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
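The masking step itself can be pictured in a few lines of Python; the tiny domain lexicon below is purely illustrative, whereas BertMasker learns which words are domain-related.

```python
# a minimal sketch of the masking idea, not the authors' implementation:
# replace domain-related words with [MASK] to obtain a domain-agnostic view
DOMAIN_LEXICON = {"battery", "screen"}           # illustrative domain words

def mask_domain_words(text: str, mask_token: str = "[MASK]") -> str:
    return " ".join(mask_token if w.lower() in DOMAIN_LEXICON else w
                    for w in text.split())

print(mask_domain_words("The battery drains fast but the screen is great"))
# -> "The [MASK] drains fast but the [MASK] is great"
```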
General Classification • Multi-Domain Sentiment Classification +3
no code implementations • COLING 2020 • Xin Lu, Yanyan Zhao, Yang Wu, Yijian Tian, Huipeng Chen, Bing Qin
We observe that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction information, but gold labels are unavailable at inference time (one workaround is sketched after this entry).
Ranked #44 on Emotion Recognition in Conversation on IEMOCAP
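A common workaround is to feed the model's own predicted context labels back in and refine them over a few passes; the sketch below illustrates that iterative idea under assumed names and a neutral-label initialization, without claiming it matches the paper's network.

```python
import torch
import torch.nn as nn

class IterativeEmotionInteraction(nn.Module):
    """Re-estimates context emotion labels over k passes (a sketch)."""
    def __init__(self, dim, num_emotions, k=2):
        super().__init__()
        self.k = k
        self.emb = nn.Embedding(num_emotions, dim)
        self.gru = nn.GRU(dim * 2, dim, batch_first=True)
        self.clf = nn.Linear(dim, num_emotions)

    def forward(self, utt):                      # (batch, seq, dim)
        labels = torch.zeros(utt.shape[:2], dtype=torch.long)  # start neutral
        for _ in range(self.k):                  # refine predicted labels
            h, _ = self.gru(torch.cat([utt, self.emb(labels)], dim=-1))
            logits = self.clf(h)
            labels = logits.argmax(dim=-1)       # use predictions, not gold
        return logits

logits = IterativeEmotionInteraction(64, num_emotions=6)(torch.randn(2, 8, 64))
```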
no code implementations • SEMEVAL 2016 • Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, Gülşen Eryiğit
Aspect-Based Sentiment Analysis (ABSA) +2