Millions of online discussions are generated every day on social media platforms.
However, existing generative models do not make full use of network structure, because they depend largely on topic modeling of documents.
When applying a topic model, a standard pre-processing step is to first build a vocabulary of frequent words.
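This pre-processing step can be sketched with a small frequency-threshold filter; the function name, corpus, and `min_count` cutoff below are illustrative assumptions, not part of any specific paper's pipeline:

```python
from collections import Counter

def build_vocabulary(docs, min_count=2):
    """Count token frequencies across all documents and keep only
    words that appear at least `min_count` times."""
    counts = Counter(token for doc in docs for token in doc.lower().split())
    return {word for word, c in counts.items() if c >= min_count}

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "a cat and a dog",
]
vocab = build_vocabulary(docs, min_count=2)
print(sorted(vocab))  # rare words ("mat", "log", "and") are dropped
```

In practice the same step usually also removes stop words and very short tokens, but a frequency cutoff is the core idea.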
In the evaluation, the proposed approach achieves topic coherence comparable to that of LDA implemented in a MapReduce framework.
How well can the concept of hate speech be abstracted in order to inform automatic classification of code-switched texts by machine learning classifiers?
The proposed model uses lookup-table embeddings of documents, words, and topics as neural network parameters to model the probabilities of words given topics and the probabilities of topics given documents.
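A minimal NumPy sketch of this parameterization, assuming the two distributions are obtained by softmax over dot products of the embeddings (the embedding sizes and the softmax scoring are my assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_words, n_topics, dim = 5, 20, 3, 8

# Lookup-table embeddings acting as the model's trainable parameters
doc_emb = rng.normal(size=(n_docs, dim))
word_emb = rng.normal(size=(n_words, dim))
topic_emb = rng.normal(size=(n_topics, dim))

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# p(word | topic): softmax over the vocabulary of topic-word scores
p_word_given_topic = softmax(topic_emb @ word_emb.T, axis=1)  # (n_topics, n_words)

# p(topic | doc): softmax over topics of doc-topic scores
p_topic_given_doc = softmax(doc_emb @ topic_emb.T, axis=1)    # (n_docs, n_topics)

# Marginalizing over topics gives p(word | doc), as in classic topic models
p_word_given_doc = p_topic_given_doc @ p_word_given_topic     # (n_docs, n_words)
```

Training such a model would fit these embeddings by maximizing the likelihood of observed document-word pairs; the sketch only shows how the two conditional distributions arise from the lookup tables.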
Topic models are increasingly relevant probabilistic models for dimensionality reduction of text data, inferring topics that capture meaningful themes of frequently co-occurring terms.
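As a concrete illustration of that dimensionality reduction, the following sketch fits LDA with scikit-learn on a toy corpus (the corpus and the choice of two topics are assumptions for the example):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus with two obvious themes: fruit and football
corpus = [
    "apple banana fruit smoothie banana",
    "fruit apple juice banana",
    "goal match football team",
    "football team match score",
]
counts = CountVectorizer().fit_transform(corpus)  # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)             # shape: (n_docs, n_topics)

# Each row is a document's distribution over the two inferred topics,
# reducing each document from vocabulary size down to 2 dimensions.
print(doc_topic.shape)
```

Each row of `doc_topic` sums to one, so documents about the same theme end up with similar low-dimensional representations.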