Sentiment Analysis
1425 papers with code • 40 benchmarks • 99 datasets
Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a tweet can be categorized as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment.
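As a minimal sketch of this setup, a pretrained off-the-shelf classifier can be applied directly with the Hugging Face `transformers` pipeline (the library downloads a default fine-tuned checkpoint; the example text and output are illustrative):

```python
# Minimal sketch: off-the-shelf sentiment classification with the
# Hugging Face `transformers` pipeline (pip install transformers).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default fine-tuned model

print(classifier("I absolutely loved this movie!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```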
Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Active subareas of research include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
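As an illustration of the lexicon-based family, NLTK's VADER analyzer scores text against a curated sentiment lexicon and requires no training data; a small sketch:

```python
# Lexicon-based sketch using NLTK's VADER analyzer (pip install nltk).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

scores = analyzer.polarity_scores("The service was great, but the food was awful.")
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```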
More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1 score on benchmark datasets including SST, GLUE, and the IMDb movie reviews corpus.
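A sketch of such an evaluation with scikit-learn, using hypothetical gold labels and predictions:

```python
# Evaluation sketch with scikit-learn: precision, recall, and F1 over
# hypothetical gold labels and model predictions.
from sklearn.metrics import classification_report

y_true = ["positive", "negative", "neutral", "positive", "negative"]
y_pred = ["positive", "negative", "positive", "positive", "negative"]

print(classification_report(y_true, y_pred, digits=3))
```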
Subtasks
- Aspect-Based Sentiment Analysis (ABSA)
- Multimodal Sentiment Analysis
- Aspect Sentiment Triplet Extraction
- Twitter Sentiment Analysis
- Aspect Term Extraction and Sentiment Classification
- Arabic Sentiment Analysis
- Persian Sentiment Analysis
- Target-oriented Opinion Words Extraction
- Fine-Grained Opinion Analysis
- Aspect-oriented Opinion Extraction
- Aspect-Sentiment-Opinion Triplet Extraction
- Aspect-Category-Opinion-Sentiment Quadruple Extraction
- Vietnamese Aspect-Based Sentiment Analysis
- Vietnamese Sentiment Analysis
- PCL Detection
Most implemented papers
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
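A minimal fine-tuning sketch (not the paper's exact recipe) with the Hugging Face `transformers` Trainer, using a toy in-memory dataset for illustration:

```python
# Fine-tuning sketch: BERT as a binary sentiment classifier via the
# Hugging Face `transformers` and `datasets` libraries.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy labeled data for illustration; real work would use SST-2, IMDb, etc.
train_ds = Dataset.from_dict({
    "text": ["a wonderful film", "dull and predictable",
             "loved every minute", "a complete waste of time"],
    "label": [1, 0, 1, 0],
}).map(lambda b: tokenizer(b["text"], truncation=True, padding="max_length",
                           max_length=32), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
)
trainer.train()
```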
Convolutional Neural Networks for Sentence Classification
We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks.
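A compact PyTorch sketch of this architecture: parallel convolutions of several widths over word embeddings, followed by max-over-time pooling. The paper initializes embeddings from pre-trained word2vec vectors; here they are randomly initialized to keep the example self-contained:

```python
# Sketch of a Kim (2014)-style CNN text classifier in PyTorch.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 filter_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        # Randomly initialized here; the paper uses pre-trained word vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in filter_sizes)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-over-time pooling per filter size, then concatenate.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))

model = TextCNN(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 50)))  # 8 sentences of 50 tokens
print(logits.shape)  # torch.Size([8, 2])
```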
Universal Language Model Fine-tuning for Text Classification
Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch.
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.
Bag of Tricks for Efficient Text Classification
This paper explores a simple and efficient baseline for text classification.
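The resulting fastText classifier averages bag-of-n-gram embeddings into a linear model. A usage sketch with the `fasttext` Python bindings (the training file name and labels are illustrative):

```python
# Sketch: training a fastText sentiment classifier (pip install fasttext).
# Expects one example per line, prefixed with its label, e.g.:
#   __label__positive I loved this film
#   __label__negative Terrible acting and a dull plot
import fasttext

model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)
print(model.predict("the plot was gripping"))
# e.g. (('__label__positive',), array([0.97...]))
```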
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
A Structured Self-attentive Sentence Embedding
This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention.
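A sketch of the core mechanism: a small two-layer MLP scores each hidden state, and several softmax-normalized attention heads pool the sequence into a matrix embedding (the dimensions here are illustrative defaults, not the paper's exact settings):

```python
# Sketch of structured self-attentive pooling over RNN hidden states.
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    def __init__(self, hidden_dim, attn_dim=64, num_heads=4):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, attn_dim, bias=False)
        self.w2 = nn.Linear(attn_dim, num_heads, bias=False)

    def forward(self, h):  # h: (batch, seq_len, hidden_dim)
        # Attention weights over the sequence, one distribution per head.
        a = torch.softmax(self.w2(torch.tanh(self.w1(h))), dim=1)
        # Weighted sums of hidden states: (batch, num_heads, hidden_dim).
        return torch.einsum("bsh,bsd->bhd", a, h)

pool = SelfAttentivePooling(hidden_dim=256)
embedding = pool(torch.randn(8, 30, 256))
print(embedding.shape)  # torch.Size([8, 4, 256])
```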
Deep contextualized word representations
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.
Domain-Adversarial Training of Neural Networks
Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
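This is commonly implemented with a gradient reversal layer: features pass through unchanged on the forward pass, while the gradient from an auxiliary domain classifier is negated (and scaled by a hyperparameter `lambda_`) on the backward pass, driving the feature extractor toward domain-invariant representations. A PyTorch sketch:

```python
# Sketch of a gradient reversal layer for domain-adversarial training.
import torch

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Negate (and scale) the gradient flowing back to the features.
        return -ctx.lambda_ * grad_output, None

features = torch.randn(8, 128, requires_grad=True)
reversed_features = GradientReversal.apply(features, 1.0)
# `reversed_features` would feed a domain classifier; its loss gradient
# reaches the feature extractor with flipped sign.
reversed_features.sum().backward()
print(features.grad[0, :3])  # each entry is -lambda_ * 1
```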