Recent years have witnessed growing interest in incorporating external knowledge such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs) into neural topic modeling.
Medical code prediction aims to automatically assign medical codes to clinical notes.
Implicit sentiment analysis, which aims to detect the sentiment of a sentence that contains no explicit sentiment words, has become an attractive research topic in recent years.
The other is a label heterogeneous graph, which is constructed from both the label hierarchy and the labels' statistical dependencies.
However, most existing approaches detect only a single path to the answer and ignore other correct paths, which can hurt the final performance.
Therefore, in this paper, multi-hop relation detection is formulated as a multi-label learning problem.
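The multi-label formulation above can be sketched as follows: instead of picking a single best relation path with an argmax, each candidate relation gets an independent sigmoid score and every relation above a threshold is kept, so several correct paths can be detected at once. The relation inventory and logits below are purely illustrative, not from the paper.

```python
import math

# Hypothetical relation inventory for a KBQA-style setting (illustrative names).
RELATIONS = ["born_in", "capital_of", "spouse_of", "located_in"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_relations(logits, threshold=0.5):
    """Multi-label decision: keep every relation whose sigmoid score clears
    the threshold, unlike single-path (argmax) decoding, which keeps one."""
    return [r for r, s in zip(RELATIONS, logits) if sigmoid(s) >= threshold]

# Toy logits where two relations are simultaneously plausible.
print(predict_relations([2.1, -1.5, 0.7, -3.0]))  # ['born_in', 'spouse_of']
```

With argmax decoding only `born_in` would survive; the thresholded multi-label decision also recovers `spouse_of`.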
Emotion detection in dialogues is challenging as it often requires the identification of thematic topics underlying a conversation, the relevant commonsense knowledge, and the intricate transition patterns between the affective states.
To alleviate the above issues, we propose a novel topic-aware evidence reasoning and stance-aware aggregation model for more accurate fact verification, with the following four key properties: 1) checking topical consistency between the claim and evidence; 2) maintaining topical coherence among multiple pieces of evidence; 3) ensuring semantic similarity between the global topic information and the semantic representation of evidence; 4) aggregating evidence based on their implicit stances to the claim.
Based on the variational auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian in the decoder to incorporate word relatedness.
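The Gaussian-decoder idea can be illustrated with a minimal sketch: if each topic is a Gaussian over word-embedding space, then words whose embeddings lie close together (i.e., related words) receive similarly high probability under that topic. The embeddings, topic mean, and variance below are toy values, and the isotropic Gaussian is a simplification of a full multivariate covariance.

```python
import math

def gaussian_density(x, mean, var):
    """Isotropic Gaussian density (diagonal covariance, equal variance)."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return math.exp(-sq / (2 * var)) / ((2 * math.pi * var) ** (d / 2))

def topic_word_distribution(topic_mean, topic_var, vocab_embeddings):
    """Normalise Gaussian densities over the vocabulary to get p(word | topic)."""
    dens = {w: gaussian_density(e, topic_mean, topic_var)
            for w, e in vocab_embeddings.items()}
    z = sum(dens.values())
    return {w: d / z for w, d in dens.items()}

# Toy 2-d embeddings: "game" and "match" are near each other, "bank" is far.
emb = {"game": [1.0, 0.2], "match": [0.9, 0.3], "bank": [-1.0, 0.5]}
probs = topic_word_distribution([1.0, 0.25], 0.5, emb)
# "game" and "match" get high, similar probability; "bank" gets little mass.
```

This is the sense in which a Gaussian topic "incorporates word relatedness": proximity in embedding space directly translates into similar topic-word probability.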
Emotion lexicons have been shown effective for emotion classification (Baziotis et al., 2018).
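One common way a lexicon helps a classifier is as a feature extractor: count how many words in the text belong to each emotion category and feed the counts alongside other features. The tiny lexicon below is a made-up illustration, not an excerpt from any published resource.

```python
# Illustrative mini-lexicon mapping words to emotion categories (assumed data).
LEXICON = {
    "happy": "joy", "delighted": "joy",
    "furious": "anger", "annoyed": "anger",
    "terrified": "fear",
}

def lexicon_features(tokens, emotions=("joy", "anger", "fear")):
    """Return a per-emotion count of lexicon hits in the token sequence."""
    counts = {e: 0 for e in emotions}
    for tok in tokens:
        emo = LEXICON.get(tok.lower())
        if emo is not None:
            counts[emo] += 1
    return counts

print(lexicon_features("I was furious and then terrified".split()))
# {'joy': 0, 'anger': 1, 'fear': 1}
```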
Graph Neural Networks (GNNs) that capture the relationships between graph nodes via message passing have been a hot research direction in the natural language processing community.
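The message-passing scheme mentioned above can be sketched in a few lines: in one layer, each node's new representation aggregates its neighbours' features (here by a simple mean with a self-loop), which is the core operation shared by most GNN variants. The graph and feature values are invented for illustration.

```python
def message_passing_step(features, adjacency):
    """One layer of mean-aggregation message passing: each node averages
    its own feature vector with those of its neighbours (self-loop included)."""
    new_features = {}
    for node, feat in features.items():
        neigh = adjacency.get(node, []) + [node]  # include self-loop
        dim = len(feat)
        summed = [0.0] * dim
        for n in neigh:
            for i in range(dim):
                summed[i] += features[n][i]
        new_features[node] = [s / len(neigh) for s in summed]
    return new_features

# Toy 3-node path graph a - b - c with 2-d features.
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(message_passing_step(feats, adj))
```

Real GNN layers add learned weight matrices and nonlinearities around this aggregation, and stacking layers lets information propagate over multi-hop neighbourhoods.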
We propose a novel generative model that explores both local and global context for jointly learning topics and topic-specific word embeddings.
Recent years have witnessed a surge of interest in using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations required for model inference in traditional topic models such as Latent Dirichlet Allocation (LDA).
In this paper, we propose a novel variational generator framework for conditional GANs to capture semantic details, improving both generation quality and diversity.
Experimental results show that our model outperforms the baseline approaches on all the datasets, with the largest improvement on the news article dataset, where F-measure increases by 15%.
As such, it is crucial to predict and rank multiple relevant emotions by their intensities.
As such, emotion detection, which predicts the multiple emotions associated with a given text, can be cast as a multi-label classification problem.
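The two sentences above combine into a simple decision rule that can be sketched directly: predict an intensity per emotion, keep the emotions whose intensity clears a threshold (the multi-label decision), and rank the survivors by decreasing intensity. The emotion names, scores, and threshold below are illustrative assumptions.

```python
def rank_emotions(intensities, threshold=0.3):
    """Multi-label decision plus ranking: keep emotions whose predicted
    intensity passes the threshold, sorted by decreasing intensity."""
    relevant = [(e, s) for e, s in intensities.items() if s >= threshold]
    return [e for e, _ in sorted(relevant, key=lambda p: -p[1])]

# Toy per-emotion intensity predictions for one text.
scores = {"joy": 0.82, "sadness": 0.05, "anger": 0.41, "fear": 0.12}
print(rank_emotions(scores))  # ['joy', 'anger']
```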
To tackle this problem, approaches based on probabilistic graphical models jointly model the generation of events and storylines without the use of annotated data.
To extract structured representations of newsworthy events from Twitter, unsupervised models typically assume that tweets involving the same named entities and expressed using similar words are likely to belong to the same event.
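The modelling assumption in the sentence above can be made concrete with a small heuristic sketch: two tweets are grouped into the same candidate event if they share at least one named entity and their word overlap (Jaccard similarity here) is high enough. The entity annotations, tweets, and threshold are toy stand-ins, not part of any specific model.

```python
def jaccard(a, b):
    """Jaccard similarity between two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def same_event(t1, t2, word_threshold=0.2):
    """Heuristic proxy for the assumption: shared named entities plus
    sufficient lexical overlap suggest the same underlying event."""
    shared_entity = bool(set(t1["entities"]) & set(t2["entities"]))
    return shared_entity and jaccard(t1["words"], t2["words"]) >= word_threshold

# Toy tweets with pre-extracted (assumed) named entities.
t1 = {"entities": ["Paris"], "words": ["fire", "near", "paris", "station"]}
t2 = {"entities": ["Paris"], "words": ["huge", "fire", "paris", "tonight"]}
t3 = {"entities": ["NASA"], "words": ["launch", "delayed", "again"]}
print(same_event(t1, t2), same_event(t1, t3))  # True False
```

Unsupervised event models replace this hard rule with probabilistic clustering, but the underlying signal, shared entities and similar wording, is the same.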