1 code implementation • 21 Sep 2023 • Shanglin Lei, Guanting Dong, XiaoPing Wang, Keheng Wang, Sirui Wang
The field of emotion recognition in conversation (ERC) has focused on separating sentence feature encoding from context modeling, leaving generative paradigms based on unified designs underexplored.
Ranked #2 on Emotion Recognition in Conversation on MELD
no code implementations • 18 Sep 2023 • Shanglin Lei, XiaoPing Wang, Guanting Dong, Jiang Li, Yingjian Liu
Our model achieves state-of-the-art performance on three datasets, demonstrating the superiority of our work.
no code implementations • 12 Aug 2023 • Jiang Li, XiaoPing Wang, Yingjian Liu, Zhigang Zeng
We utilize TE and SE to combine the strengths of previous methods in a simple manner, efficiently capturing temporal and spatial contextual information in the conversation.
1 code implementation • 28 Jul 2023 • Jiang Li, XiaoPing Wang, Yingjian Liu, Zhigang Zeng
RUME is applied to extract conversation-level contextual emotional cues while aligning data distributions across modalities; ACME is utilized to perform multimodal interaction centered on the textual modality; LESM is used to model and capture emotion-shift information, thereby guiding the learning of the main task.
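As a hedged illustration of the emotion-shift idea behind LESM (the module's actual formulation is not given here, so this helper and its labeling rule are assumptions): one common way to obtain emotion-shift supervision is to mark each utterance whose emotion label differs from the preceding utterance's.

```python
# Hypothetical sketch: deriving emotion-shift labels as an auxiliary signal.
# The labeling rule (shift = emotion differs from the previous utterance)
# is an assumption for illustration, not the paper's exact definition.

def emotion_shift_labels(emotions):
    """For each utterance from the second onward, return 1 if its emotion
    differs from the immediately preceding utterance, else 0."""
    return [int(cur != prev) for prev, cur in zip(emotions, emotions[1:])]

labels = emotion_shift_labels(["neutral", "neutral", "anger", "anger", "joy"])
# → [0, 1, 0, 1]
```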
Ranked #8 on Emotion Recognition in Conversation on IEMOCAP
Emotion Recognition in Conversation • Multimodal Emotion Recognition
no code implementations • 2 Jul 2023 • Jiang Li, XiaoPing Wang, Zhigang Zeng
How to model the context in a conversation is a central aspect and a major challenge of ERC tasks.
no code implementations • 3 Jun 2023 • Fusheng Yu, XiaoPing Wang, Jiang Li, Shaojin Wu, Junjie Zhang, Zhigang Zeng
However, limited availability of high-quality datasets has hindered the development of deep learning methods for safety clothing and helmet detection.
1 code implementation • 20 Mar 2023 • Yingjian Liu, Jiang Li, XiaoPing Wang, Zhigang Zeng
Emotion Recognition in Conversation (ERC) has attracted growing attention in recent years as a result of the advancement and implementation of human-computer interface technologies.
Ranked #5 on Emotion Recognition in Conversation on EmoryNLP
no code implementations • 13 Dec 2022 • Guoqing Lv, Jiang Li, XiaoPing Wang, Zhigang Zeng
We separately encode the last utterance and fuse it with the entire dialogue through the multi-head attention based intention fusion module to capture the speaker's intention.
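The fusion step described above can be sketched in miniature. This is a hedged, single-head stand-in for the paper's multi-head attention based intention fusion module (function name and toy vectors are illustrative assumptions): the encoded last utterance acts as the query, and the dialogue's utterance encodings act as keys and values.

```python
import math

def attention_fuse(query, context):
    """Fuse a query vector with context vectors via scaled dot-product
    attention: weights = softmax(q . k / sqrt(d)), output = weighted sum
    of the context vectors. Single-head stand-in for multi-head fusion."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, c)) / math.sqrt(d) for c in context]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * c[i] for w, c in zip(weights, context)) for i in range(d)]

# Query aligned with the first context vector attends to it more strongly.
fused = attention_fuse([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Because the two context vectors here are one-hot, the fused vector's components are exactly the attention weights, which makes the behavior easy to inspect.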
1 code implementation • 6 Jul 2022 • Jiang Li, XiaoPing Wang, Guoqing Lv, Zhigang Zeng
In multimodal ERC, GNNs are capable of extracting both long-distance contextual information and inter-modal interactive information.
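A minimal sketch of the kind of graph such a GNN operates on, assuming (the paper's exact construction may differ) nodes of the form (utterance, modality), temporal edges within a modality for long-distance context, and inter-modal edges linking modalities of the same utterance:

```python
# Hypothetical conversation-graph construction for multimodal ERC.
# Window size, modality names, and edge scheme are assumptions for illustration.

def build_conversation_graph(n_utterances,
                             modalities=("text", "audio", "visual"),
                             window=2):
    """Nodes are (utterance_index, modality) pairs. Edges connect
    (a) the same modality across utterances within a temporal window
        (long-distance contextual information), and
    (b) different modalities of the same utterance
        (inter-modal interactive information)."""
    nodes = [(i, m) for i in range(n_utterances) for m in modalities]
    edges = set()
    for i in range(n_utterances):
        for m in modalities:
            for j in range(max(0, i - window), i):
                edges.add(((j, m), (i, m)))       # temporal edge
            for m2 in modalities:
                if m2 != m:
                    edges.add(((i, m), (i, m2)))  # inter-modal edge
    return nodes, edges
```

A GNN would then propagate utterance features over both edge types, mixing context and modalities in one message-passing step.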
Ranked #19 on Emotion Recognition in Conversation on IEMOCAP
Emotion Classification • Emotion Recognition in Conversation +1
1 code implementation • 10 Jan 2022 • Xiaobin Fan, XiaoPing Wang, Kai Lu, Lei Xue, Jinjing Zhao
Research by Fu et al. shows that algorithms based on the Monotonic Search Network (MSNET), such as NSG and NSSG, achieve state-of-the-art search efficiency.
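The search procedure these graphs support can be sketched as follows. This is a generic greedy-routing illustration on a neighbor graph, not the NSG/NSSG implementation itself (the function, the toy graph, and squared-Euclidean distance are assumptions): on an MSNET, a path whose distance to the query decreases monotonically at every step is guaranteed to exist.

```python
# Hypothetical sketch of greedy routing on a monotonic-search-network-style
# neighbor graph: from an entry node, repeatedly move to the neighbor
# closest to the query; stop when no neighbor improves the distance.

def greedy_search(graph, vectors, entry, query):
    """graph: node -> list of neighbor nodes; vectors: node -> coordinates.
    Returns the node where greedy routing terminates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared Euclidean
    cur = entry
    while True:
        nxt = min(graph[cur], key=lambda n: dist(vectors[n], query), default=cur)
        if dist(vectors[nxt], query) >= dist(vectors[cur], query):
            return cur                                  # no neighbor improves
        cur = nxt

# Toy chain graph 0 - 1 - 2 - 3 along the x-axis.
vectors = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0), 3: (3.0, 0.0)}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = greedy_search(graph, vectors, entry=0, query=(3.0, 0.0))
# → 3
```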
no code implementations • 10 Jan 2022 • Jiahao Zheng, Sen Zhang, XiaoPing Wang, Zhigang Zeng
Multimodal sentiment analysis (MSA) is a fundamental yet complex research problem due to the heterogeneity gap between different modalities and the ambiguity of human emotional expression.
no code implementations • 4 Mar 2021 • Feng Ye, Zachary Morgan, Wei Tian, Songxue Chi, XiaoPing Wang, Michael E. Manley, David Parker, Mojammel A. Khan, J. F. Mitchell, Randy Fishman
Despite the $J_{{\rm eff}}=1/2$ moments, the spin Hamiltonian is dominated by a large in-plane anisotropy $K_z \sim -1$ meV.
Strongly Correlated Electrons • Materials Science
no code implementations • 8 Sep 2019 • Xugang Wu, XiaoPing Wang, Xu Zhou, Songlei Jian
On this basis, we formulate the adversarial generation problem and propose an end-to-end pipeline to generate a perturbed texture map for the 3D object that causes the trackers to fail.