Sentiment Analysis
1277 papers with code • 43 benchmarks • 92 datasets
Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.
Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
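The lexicon-based approach mentioned above can be sketched in a few lines: score a text by summing polarity weights of its words from a word-to-score lexicon. The tiny lexicon, negation handling, and thresholds below are illustrative assumptions, not taken from any standard resource; real systems use lexicons such as VADER or SentiWordNet.

```python
# Minimal lexicon-based sentiment sketch. The lexicon below is illustrative
# only; production systems use curated resources (e.g. VADER, SentiWordNet).
LEXICON = {
    "good": 1.0, "great": 2.0, "love": 2.0, "excellent": 2.0,
    "bad": -1.0, "terrible": -2.0, "hate": -2.0, "awful": -2.0,
}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text: str) -> str:
    """Classify text as positive / negative / neutral by summed word scores."""
    score, flip = 0.0, 1.0
    for tok in text.lower().split():
        word = tok.strip(".,!?")
        if word in NEGATORS:
            flip = -1.0  # flip polarity of the next sentiment-bearing word
            continue
        if word in LEXICON:
            score += flip * LEXICON[word]
            flip = 1.0
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("I love this movie"))  # positive
print(lexicon_sentiment("This is not good"))   # negative
```

Machine learning approaches replace the hand-built lexicon with weights learned from labeled data; hybrid methods combine learned features with lexicon scores.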
More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as F1, recall, and precision. Benchmark datasets such as SST, GLUE, and the IMDb movie reviews dataset are commonly used to evaluate sentiment analysis systems.
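The evaluation metrics named above can be computed directly from gold and predicted labels. A minimal sketch of binary precision, recall, and F1 (the label strings and example predictions are illustrative):

```python
def precision_recall_f1(y_true, y_pred, positive="positive"):
    """Binary precision, recall, and F1 for a single positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["positive", "negative", "positive", "negative"]
pred = ["positive", "positive", "positive", "negative"]
p, r, f = precision_recall_f1(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.67 R=1.00 F1=0.80
```

For multi-class settings (positive/negative/neutral), these per-class scores are typically combined with macro or micro averaging, as in scikit-learn's `precision_recall_fscore_support`.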
Libraries
Use these libraries to find Sentiment Analysis models and implementations.
Subtasks
- Aspect-Based Sentiment Analysis (ABSA)
- Multimodal Sentiment Analysis
- Aspect Sentiment Triplet Extraction
- Twitter Sentiment Analysis
- Aspect Term Extraction and Sentiment Classification
- Target-oriented Opinion Words Extraction
- Persian Sentiment Analysis
- Arabic Sentiment Analysis
- Aspect-oriented Opinion Extraction
- Fine-Grained Opinion Analysis
- Aspect-Sentiment-Opinion Triplet Extraction
- Aspect-Category-Opinion-Sentiment Quadruple Extraction
- Vietnamese Aspect-Based Sentiment Analysis
- Vietnamese Sentiment Analysis
- PCL Detection
Most implemented papers
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging.
Character-level Convolutional Networks for Text Classification
This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification.
Distributed Representations of Sentences and Documents
Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models.
Universal Sentence Encoder
For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.
XLNet: Generalized Autoregressive Pretraining for Language Understanding
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.
Pay Attention to MLPs
Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years.
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks.
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout.
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.
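Two of the four EDA operations, random swap and random deletion, need no external resources and can be sketched in plain Python; the other two, synonym replacement and random insertion, additionally require a synonym source such as WordNet. The sentence and parameter values below are illustrative.

```python
import random

def random_swap(words, n=1):
    """EDA random swap: exchange the positions of two random words, n times."""
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """EDA random deletion: drop each word independently with probability p."""
    if len(words) == 1:
        return words
    kept = [w for w in words if random.random() > p]
    # Keep at least one word so the augmented example is never empty.
    return kept if kept else [random.choice(words)]

random.seed(0)
sentence = "the movie was surprisingly good".split()
print(random_swap(sentence))
print(random_deletion(sentence, p=0.3))
```

Each operation produces a label-preserving variant of the training sentence, which is the property EDA relies on for augmenting small text-classification datasets.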