About

Sentiment analysis is the task of classifying the polarity of a given text.
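To make the task concrete, here is a minimal, purely illustrative sketch of polarity classification using a tiny hand-made word lexicon. The word lists and scoring rule are assumptions for demonstration only, not a real sentiment resource or any method from the papers below.

```python
# Minimal sketch of polarity classification with a tiny toy lexicon.
# The word lists here are illustrative assumptions, not a real resource.
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "boring"}

def classify_polarity(text: str) -> str:
    """Label text as 'positive', 'negative', or 'neutral' by word counts."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_polarity("a great and wonderful film"))  # positive
print(classify_polarity("an awful boring plot"))        # negative
```

Modern systems replace the lexicon with a learned classifier over text representations, which is what the papers listed below study.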

Greatest papers with code

Adversarial Training Methods for Semi-Supervised Text Classification

25 May 2016 tensorflow/models

Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting.

SEMI-SUPERVISED TEXT CLASSIFICATION SENTIMENT ANALYSIS WORD EMBEDDINGS
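The core idea of adversarial training, perturbing the input a bounded step along the loss gradient and training on the perturbed example, can be sketched with a toy model. The logistic-regression classifier, random vectors, and epsilon value below are assumptions for illustration, not the paper's actual LSTM setup.

```python
import numpy as np

# Sketch of the adversarial perturbation used in adversarial training on
# (word-embedding) inputs: take a step of norm eps along the loss gradient
# and train on the perturbed example as well. The classifier here is a toy
# logistic regression so the gradient can be written analytically.
rng = np.random.default_rng(0)
w = rng.normal(size=5)   # toy classifier weights
x = rng.normal(size=5)   # "embedding" of one example
y = 1.0                  # gold label in {0, 1}
eps = 0.1                # perturbation norm bound

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x_):
    """Binary cross-entropy of the toy classifier on input x_."""
    p = sigmoid(w @ x_)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss with respect to the input x: (p - y) * w
g = (sigmoid(w @ x) - y) * w

# Adversarial example: normalized gradient step of size eps
r_adv = eps * g / (np.linalg.norm(g) + 1e-12)
x_adv = x + r_adv

print(loss(x), loss(x_adv))  # the adversarial loss is the larger one
```

Virtual adversarial training follows the same recipe but measures divergence from the model's own predictions instead of using the gold label, which is what lets it use unlabeled data.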

Effective Use of Word Order for Text Categorization with Convolutional Neural Networks

HLT 2015 tensorflow/models

A convolutional neural network (CNN) is a neural network that can make use of the internal structure of data, such as the 2D structure of image data.

SENTIMENT ANALYSIS
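The way a text CNN exploits word order can be sketched with a single 1-D convolution: a filter slides over windows of consecutive word vectors, so each feature sees the words in sequence. The random embeddings, filter, and window size below are stand-in assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch of a 1-D convolution over a word sequence, as used in text CNNs:
# a filter covers a window of consecutive word vectors, preserving word
# order within the window. Vectors and filter weights are random stand-ins.
rng = np.random.default_rng(0)
sent = rng.normal(size=(7, 4))   # 7 words, each a 4-dim embedding
filt = rng.normal(size=(3, 4))   # one filter spanning a 3-word window

# One feature per window position, then max-over-time pooling
feats = np.array([(sent[i:i + 3] * filt).sum() for i in range(7 - 3 + 1)])
pooled = feats.max()
print(feats.shape, pooled)
```

A real text CNN uses many such filters plus a nonlinearity, and feeds the pooled features to a classifier.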

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

NeurIPS 2019 huggingface/transformers

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging.

KNOWLEDGE DISTILLATION LANGUAGE MODELLING LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TRANSFER LEARNING
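The soft-target objective behind knowledge distillation, training a small student to match a large teacher's temperature-softened output distribution, can be sketched in a few lines. The logit values and temperature below are made-up numbers for illustration, not DistilBERT's actual training setup.

```python
import numpy as np

# Sketch of the soft-target part of knowledge distillation: the student
# minimizes cross-entropy against the teacher's softened distribution.

def softmax(logits, T=1.0):
    """Softmax at temperature T (higher T gives a softer distribution)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student against teacher soft targets."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(p_teacher * np.log(p_student + 1e-12)).sum()

teacher = [4.0, 1.0, 0.2]   # confident teacher over 3 classes
aligned = [3.8, 0.9, 0.1]   # student close to the teacher
off     = [0.1, 3.5, 0.2]   # student that disagrees with the teacher

print(distillation_loss(teacher, aligned, T=2.0))
print(distillation_loss(teacher, off, T=2.0))
```

A student whose logits track the teacher's gets a lower loss, which is the signal that transfers the teacher's knowledge during training.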

XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 huggingface/transformers

With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining such as BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING HUMOR DETECTION LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE PARAPHRASE IDENTIFICATION QUESTION ANSWERING READING COMPREHENSION SENTIMENT ANALYSIS TEXT CLASSIFICATION

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

ICLR 2020 google-research/bert

Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.

KNOWLEDGE DISTILLATION LANGUAGE MODELLING MODEL COMPRESSION SENTIMENT ANALYSIS