no code implementations • COLING 2022 • Xu Zhang, Zejie Liu, Yanzheng Xiang, Deyu Zhou
However, such an approach might not fully exploit the knowledge in PTMs, as it is constrained by the difficulty of the task.
no code implementations • Findings (ACL) 2022 • Tao Wang, Linhai Zhang, Chenchen Ye, Junxi Liu, Deyu Zhou
Medical code prediction from clinical notes aims at automatically associating medical codes with the clinical notes.
no code implementations • ACL 2022 • Linhai Zhang, Xuemeng Hu, Boyu Wang, Deyu Zhou, Qian-Wen Zhang, Yunbo Cao
Recent years have witnessed growing interest in incorporating external knowledge such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs) into neural topic modeling.
no code implementations • EMNLP 2021 • Chenchen Ye, Linhai Zhang, Yulan He, Deyu Zhou, Jie Wu
The other is label heterogeneous graph, which is constructed based on both the labels’ hierarchy and their statistical dependencies.
no code implementations • EMNLP 2021 • Deyu Zhou, Jianan Wang, Linhai Zhang, Yulan He
Implicit sentiment analysis, aiming at detecting the sentiment of a sentence without sentiment words, has become an attractive research topic in recent years.
no code implementations • Findings (EMNLP) 2021 • Linhai Zhang, Deyu Zhou, Chao Lin, Yulan He
Therefore, in this paper, multi-hop relation detection is considered as a multi-label learning problem.
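Casting multi-hop relation detection as multi-label learning means each candidate relation is scored as an independent binary decision rather than forcing a single prediction. A minimal sketch of that decision rule (the scoring model and threshold here are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def multi_label_predict(logits, threshold=0.5):
    """Treat each candidate relation as an independent binary decision.

    logits: array of shape (num_relations,) with one score per relation
    (hypothetical setup; the paper's scoring model may differ).
    """
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid per relation
    return np.flatnonzero(probs >= threshold)

# A question may match several relations along a multi-hop path:
scores = np.array([2.1, -0.3, 1.4, -2.0])
predicted = multi_label_predict(scores)     # relations 0 and 2 pass 0.5
```

The key contrast with single-label classification is the per-relation sigmoid instead of a softmax over relations, so any number of relations (including several hops) can fire at once.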
no code implementations • COLING 2022 • Linhai Zhang, Deyu Zhou
Due to their incompleteness, a fundamental task for KGs, which is known as Knowledge Graph Completion (KGC), is to perform link prediction and infer new facts based on the known facts.
no code implementations • Findings (EMNLP) 2021 • Deyu Zhou, Yanzheng Xiang, Linhai Zhang, Chenchen Ye, Qian-Wen Zhang, Yunbo Cao
However, most existing approaches detect only a single path to obtain the answer, without considering other correct paths, which might affect the final performance.
no code implementations • 12 Mar 2024 • Yanyue Zhang, Pengfei Li, Yilong Lai, Deyu Zhou, Yulan He
Specifically, a small set of synthesized negative reviews is obtained by rewriting positive text via a large language model.
no code implementations • 10 Mar 2024 • Xin Zhang, Linhai Zhang, Deyu Zhou, Guoqiang Xu
Due to the sparsity of user data, sentiment analysis on user reviews in e-commerce platforms often suffers from poor performance, especially when faced with extremely sparse user data or long-tail labels.
1 code implementation • 5 Mar 2024 • Congzhi Zhang, Linhai Zhang, Deyu Zhou
Conventional multi-hop fact verification models are prone to rely on spurious correlations from the annotation artifacts, leading to an obvious performance decline on unbiased datasets.
no code implementations • 5 Mar 2024 • Congzhi Zhang, Linhai Zhang, Deyu Zhou, Guoqiang Xu
Specifically, causal intervention is implemented by designing prompts without accessing the parameters or logits of LLMs. The chains-of-thought generated by LLMs are employed as the mediator variable, and the causal effect between the input prompt and the output answers is calculated through front-door adjustment to mitigate model biases.
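The front-door adjustment mentioned above computes P(y | do(x)) = Σ_m P(m | x) · Σ_x′ P(y | m, x′) P(x′), with the chain-of-thought playing the role of the mediator m. A toy numerical sketch with discrete variables (the distributions below are made up for illustration; the paper estimates these quantities from LLM outputs):

```python
# Toy front-door adjustment: X = input prompt, M = chain-of-thought
# (mediator), Y = answer. All probability tables are illustrative.
P_x = {0: 0.6, 1: 0.4}                      # marginal of the input
P_m_given_x = {0: {0: 0.7, 1: 0.3},         # mediator given input
               1: {0: 0.2, 1: 0.8}}
P_y_given_mx = {(0, 0): 0.1, (0, 1): 0.5,   # P(Y=1 | M=m, X=x)
                (1, 0): 0.6, (1, 1): 0.9}

def front_door(x):
    """P(Y=1 | do(X=x)) via the front-door formula."""
    total = 0.0
    for m, p_m in P_m_given_x[x].items():
        # inner sum marginalizes the confounded back-path over x'
        inner = sum(P_y_given_mx[(m, xp)] * P_x[xp] for xp in P_x)
        total += p_m * inner
    return total

effect = front_door(0)   # interventional probability for X = 0
```

The inner sum over x′ is what blocks the back-door path through the (unobserved) confounder, which is why the method needs no access to the model's internals.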
1 code implementation • 2 Mar 2024 • Linhai Zhang, Jialong Wu, Deyu Zhou, Guoqiang Xu
To address poor model calibration, we incorporate a regularization method during LoRA training to keep the model from becoming over-confident, and a Monte-Carlo dropout mechanism is employed to enhance uncertainty estimation.
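Monte-Carlo dropout estimates uncertainty by keeping dropout active at inference time and measuring the spread of repeated stochastic forward passes. A self-contained NumPy sketch of the idea (the toy two-layer network and its random weights are illustrative assumptions; the paper applies this on top of a LoRA-tuned model, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier; weights are arbitrary for illustration.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 2)); b2 = np.zeros(2)

def forward(x, drop_p=0.3):
    """One stochastic forward pass with dropout left ON."""
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p    # fresh dropout mask each call
    h = h * mask / (1.0 - drop_p)           # inverted dropout scaling
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax over 2 classes

def mc_dropout_predict(x, n_samples=200):
    samples = np.stack([forward(x) for _ in range(n_samples)])
    # predictive mean, plus per-class std as the uncertainty signal
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = mc_dropout_predict(rng.normal(size=8))
```

A large `std` flags inputs where the stochastic passes disagree, i.e. where the model should not be trusted as confidently as its single-pass probability suggests.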
1 code implementation • 2 Mar 2024 • Jialong Wu, Linhai Zhang, Deyu Zhou, Guoqiang Xu
However, most existing debiasing methods focus on single-variable causal inference, which is not suitable for ABSA with its two input variables (the target aspect and the review).
no code implementations • 1 Feb 2024 • Qun Ma, Xiao Xue, Deyu Zhou, Xiangning Yu, Donghua Liu, Xuwen Zhang, Zihan Zhao, Yifan Shen, Peilin Ji, Juanjuan Li, Gang Wang, Wanpeng Ma
These agents, known as LLM-based Agents, offer the potential to enhance the anthropomorphism lacking in ABM.
1 code implementation • 23 Oct 2023 • Yingjie Zhu, Jiasheng Si, Yibo Zhao, Haiyang Zhu, Deyu Zhou, Yulan He
Experimental results show that the proposed approach outperforms the SOTA baselines and can generate linguistically diverse counterfactual data without disrupting their logical relationships.
no code implementations • 22 Jul 2023 • Jiasheng Si, Yingjie Zhu, Xingyu Shi, Deyu Zhou, Yulan He
Specifically, with the use of the neural topic model and the language model, the target information is augmented by explainable topic representations.
no code implementations • 30 Jun 2023 • Chenduo Hao, Xu Zhang, Chuanbao Gao, Deyu Zhou
To address this issue, we propose the Clause Feature Correlation Decoupling and Coupling (CFCDC) model, which uses a feature representation decoupling method to separate the SELECT and WHERE clauses at the parameter level.
no code implementations • 16 May 2023 • Jiasheng Si, Yingjie Zhu, Deyu Zhou
The success of deep learning models on multi-hop fact verification has prompted researchers to understand the behavior behind their veracity.
no code implementations • ICCV 2023 • Zhentao Yu, Zixin Yin, Deyu Zhou, Duomin Wang, Finn Wong, Baoyuan Wang
In this paper, we introduce a simple and novel framework for one-shot audio-driven talking head generation.
no code implementations • 2 Dec 2022 • Jiasheng Si, Yingjie Zhu, Deyu Zhou
Specifically, a GCN is utilized to incorporate the topological interaction information among multiple pieces of evidence for learning evidence representations.
1 code implementation • COLING 2022 • Zeng Yang, Linhai Zhang, Deyu Zhou
Current few-shot NER methods focus on leveraging existing datasets from rich-resource domains, which might fail in a training-from-scratch setting where no source-domain data is used.
1 code implementation • ACL 2021 • Jiasheng Si, Deyu Zhou, Tongzhe Li, Xingyu Shi, Yulan He
To alleviate the above issues, we propose a novel topic-aware evidence reasoning and stance-aware aggregation model for more accurate fact verification, with the following four key properties: 1) checking topical consistency between the claim and evidence; 2) maintaining topical coherence among multiple pieces of evidence; 3) ensuring semantic similarity between the global topic information and the semantic representation of evidence; 4) aggregating evidence based on their implicit stances to the claim.
1 code implementation • ACL 2021 • Lixing Zhu, Gabriele Pergola, Lin Gui, Deyu Zhou, Yulan He
Emotion detection in dialogues is challenging as it often requires the identification of thematic topics underlying a conversation, the relevant commonsense knowledge, and the intricate transition patterns between the affective states.
Ranked #12 on Emotion Recognition in Conversation on DailyDialog
no code implementations • 21 May 2021 • Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang
Based on the variational auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian in decoder to incorporate word relatedness.
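Modeling each topic as a multivariate Gaussian over pre-trained word embeddings means nearby words in embedding space receive similar probability under the same topic, which is how word relatedness enters the decoder. A hedged sketch of that topic-to-word mapping (names, dimensions, and the diagonal-covariance choice are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_emb = rng.normal(size=(1000, 50))   # pre-trained word embeddings (V x D)
mu = rng.normal(size=50)                  # one topic's Gaussian mean
log_sigma2 = np.zeros(50)                 # diagonal log-variance

def topic_word_dist(emb, mu, log_sigma2):
    """P(w | topic) from a Gaussian density over word embeddings."""
    # log N(e_w; mu, diag(sigma^2)) up to an additive constant
    log_density = -0.5 * ((emb - mu) ** 2 / np.exp(log_sigma2)).sum(-1)
    log_density -= log_density.max()      # stabilize before exponentiating
    p = np.exp(log_density)
    return p / p.sum()                    # normalize over the vocabulary

beta_k = topic_word_dist(vocab_emb, mu, log_sigma2)
```

With unit variance, the highest-probability word is simply the one whose embedding lies closest to the topic mean, which is exactly the relatedness effect the Gaussian parameterization is after.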
no code implementations • COLING 2020 • Deyu Zhou, Shuangzhi Wu, Qing Wang, Jun Xie, Zhaopeng Tu, Mu Li
Emotion lexicons have been shown effective for emotion classification (Baziotis et al., 2018).
no code implementations • EMNLP 2020 • Xuemeng Hu, Rui Wang, Deyu Zhou, Yuxuan Xiong
ToMCAT employs a generator network to interpret topics and an encoder network to infer document topics.
no code implementations • EMNLP 2020 • Deyu Zhou, Xuemeng Hu, Rui Wang
Graph Neural Networks (GNNs) that capture the relationships between graph nodes via message passing have been a hot research direction in the natural language processing community.
1 code implementation • 11 Aug 2020 • Lixing Zhu, Yulan He, Deyu Zhou
We propose a novel generative model to explore both local and global context for joint learning topics and topic-specific word embeddings.
no code implementations • ACL 2020 • Lixing Zhu, Yulan He, Deyu Zhou
Opinion prediction on Twitter is challenging due to the transient nature of tweet content and neighbourhood context.
1 code implementation • ACL 2020 • Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, Haiyang Xu
Recent years have witnessed a surge of interest in using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations for model inference required by traditional topic models such as Latent Dirichlet Allocation (LDA).
Ranked #1 on Text Clustering on 20 Newsgroups
no code implementations • IJCNLP 2019 • Yang Yang, Deyu Zhou, Yulan He, Meng Zhang
Unveiling the hidden event information can help to understand how the emotions are evoked and provide explainable results.
no code implementations • 22 Sep 2019 • Mingqi Hu, Deyu Zhou, Yulan He
In this paper, we propose a novel variational generator framework for conditional GANs to capture semantic details, improving generation quality and diversity.
no code implementations • IJCNLP 2019 • Rui Wang, Deyu Zhou, Yulan He
Experimental results show that our model outperforms the baseline approaches on all the datasets, with more significant improvements observed on the news article dataset, where an increase of 15% in F-measure is observed.
no code implementations • 1 Nov 2018 • Rui Wang, Deyu Zhou, Yulan He
The proposed ATM models topics with Dirichlet prior and employs a generator network to capture the semantic patterns among latent topics.
no code implementations • EMNLP 2018 • Yang Yang, Deyu Zhou, Yulan He
As such, it is crucial to predict and rank multiple relevant emotions by their intensities.
no code implementations • NAACL 2018 • Deyu Zhou, Yang Yang, Yulan He
As such, emotion detection, to predict multiple emotions associated with a given text, can be cast into a multi-label classification problem.
no code implementations • NAACL 2018 • Deyu Zhou, Linsen Guo, Yulan He
To tackle this problem, approaches based on probabilistic graphic models jointly model the generations of events and storylines without the use of annotated data.
no code implementations • EACL 2017 • Deyu Zhou, Xuan Zhang, Yulan He
To extract structured representations of newsworthy events from Twitter, unsupervised models typically assume that tweets involving the same named entities and expressed using similar words are likely to belong to the same event.