Text classification is the task of assigning a sentence or document an appropriate category. The categories depend on the chosen dataset and can range from broad topics to fine-grained labels.
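As a concrete illustration of the task (not tied to any of the papers listed below), a minimal topic classifier can be built from a bag-of-words pipeline; the training sentences and labels here are invented toy data:

```python
# Minimal text-classification sketch: TF-IDF features + logistic regression.
# The tiny training set and its "sports"/"finance" labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the team won the match in overtime",
    "stocks rallied after the earnings report",
    "the striker scored twice in the final",
    "the central bank raised interest rates",
]
train_labels = ["sports", "finance", "sports", "finance"]

# Pipeline: turn raw text into TF-IDF vectors, then fit a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["the striker won the final match"])[0])
```

Real systems replace the toy corpus with a labeled dataset and usually swap the linear model for a neural architecture, but the input/output contract (text in, category out) is the same.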
(Image credit: Text Classification Algorithms: A Survey)
A new framework, TensorGCN (tensor graph convolutional networks), is presented for this task.
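TensorGCN's own details are not given here, but the graph-convolution building block that tensor GCN variants generalize can be sketched in a few lines of NumPy. This follows the standard GCN propagation rule (symmetrically normalized adjacency with self-loops), not the TensorGCN method itself, and all matrices are made-up toy values:

```python
# One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 X W).
# Toy graph: 3 nodes with edges 0-1 and 1-2; all values are illustrative.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # toy adjacency matrix
X = np.eye(3)                            # toy node features (one-hot)
W = np.full((3, 2), 0.5)                 # toy weights: 3 dims -> 2 dims

A_hat = A + np.eye(3)                    # add self-loops
d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
H = np.maximum(0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)  # ReLU

print(H.shape)
```

Each node's new representation mixes its own features with those of its neighbors; for text classification, nodes are typically words and documents linked by co-occurrence edges.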
This is because current approaches are built for and trained on clean, complete data, and thus cannot extract features that adequately represent incomplete data.
A recently introduced text classifier, called SS3, has obtained state-of-the-art performance on CLEF's eRisk tasks.
Language models have become a key step in achieving state-of-the-art results in many different Natural Language Processing (NLP) tasks.
The use of large pretrained neural networks to create contextualized word embeddings has drastically improved performance on several natural language processing (NLP) tasks.
Our model achieves high classification accuracy on this dataset and outperforms the previous model for multilingual text classification, highlighting the language independence of McM.
Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis, which in turn suggests tuning certain parameters of neural networks for better prediction accuracy without violating the privacy budget.
Pre-trained word embeddings encode general word semantics and lexical regularities of natural language, and have proven useful across many NLP tasks, including word sense disambiguation, machine translation, and sentiment analysis, to name a few.
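A common way to use such pretrained vectors for classification is to average them into a single sentence representation. The two-dimensional vectors below are invented stand-ins for real pretrained embeddings such as GloVe or word2vec:

```python
# Averaging pretrained word vectors into a sentence embedding.
# The 2-d vectors are toy stand-ins for a real pretrained lookup table.
import numpy as np

embeddings = {
    "good":  np.array([0.9, 0.1]),
    "bad":   np.array([-0.8, 0.2]),
    "movie": np.array([0.1, 0.7]),
}

def sentence_vector(tokens, table):
    """Mean of the known word vectors; zeros if no token is in the table."""
    vecs = [table[t] for t in tokens if t in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

v = sentence_vector(["good", "movie"], embeddings)
print(v)  # mean of [0.9, 0.1] and [0.1, 0.7] -> [0.5, 0.4]
```

The resulting fixed-length vector can be fed to any downstream classifier; contextualized models (e.g. BERT-style encoders) replace this static lookup with context-dependent vectors.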