Sentiment Analysis
1293 papers with code • 39 benchmarks • 93 datasets
Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given the text and accompanying labels, a model can be trained to predict the correct sentiment.
Sentiment Analysis techniques fall into three broad categories: machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Active subareas of research include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
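To make the lexicon-based category concrete, here is a minimal sketch of a lexicon classifier: it counts matches against positive and negative word lists and maps the score to a polarity label. The word lists here are tiny illustrative placeholders, not a real sentiment lexicon such as VADER or SentiWordNet.

```python
import re

# Illustrative word lists only; real systems use curated lexicons.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def classify(text: str) -> str:
    """Classify polarity by counting lexicon hits in the tokenized text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("I love this great movie"))    # positive
print(classify("What a terrible, awful day")) # negative
```

Machine learning and hybrid approaches replace or augment the fixed word lists with learned weights, which handles negation, sarcasm, and domain shift far better than raw lexicon counting.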
More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1. Benchmark datasets such as SST, the IMDB movie reviews corpus, and the GLUE suite are commonly used to evaluate sentiment analysis systems.
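The metrics mentioned above are straightforward to compute per class from a set of gold and predicted labels. The sketch below hand-rolls precision, recall, and F1 for one label; the example labels are made up for illustration (in practice one would typically use `sklearn.metrics.precision_recall_fscore_support`).

```python
def prf(gold, pred, label):
    """Per-class precision, recall, and F1 from parallel label lists."""
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy gold/predicted labels for a three-class sentiment task.
gold = ["positive", "negative", "positive", "neutral", "negative"]
pred = ["positive", "positive", "positive", "neutral", "negative"]

p, r, f = prf(gold, pred, "positive")
print(f"precision={p:.3f} recall={r:.3f} f1={f:.3f}")  # precision=0.667 recall=1.000 f1=0.800
```

For multi-class sentiment benchmarks, per-class scores are usually aggregated via macro- or micro-averaging; macro-F1 is the more common headline number when classes are imbalanced.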
Subtasks
- Aspect-Based Sentiment Analysis (ABSA)
- Multimodal Sentiment Analysis
- Aspect Sentiment Triplet Extraction
- Twitter Sentiment Analysis
- Aspect Term Extraction and Sentiment Classification
- Target-oriented Opinion Words Extraction
- Arabic Sentiment Analysis
- Persian Sentiment Analysis
- Aspect-oriented Opinion Extraction
- Fine-Grained Opinion Analysis
- Aspect-Sentiment-Opinion Triplet Extraction
- Aspect-Category-Opinion-Sentiment Quadruple Extraction
- Vietnamese Aspect-Based Sentiment Analysis
- Vietnamese Sentiment Analysis
- PCL Detection
Latest papers
Sample Design Engineering: An Empirical Study of What Makes Good Downstream Fine-Tuning Samples for LLMs
In the burgeoning field of Large Language Models (LLMs) like ChatGPT and LLaMA, Prompt Engineering (PE) is renowned for boosting zero-shot or in-context learning (ICL) through prompt modifications.
Cooperative Sentiment Agents for Multimodal Sentiment Analysis
In this paper, we propose a new Multimodal Representation Learning (MRL) method for Multimodal Sentiment Analysis (MSA), which facilitates the adaptive interaction between modalities through Cooperative Sentiment Agents, named Co-SA.
Large Language Models in Targeted Sentiment Analysis
Fine-tuned Flan-T5 models with THoR achieve at least a 5% improvement with the base-size model over the zero-shot results.
On the Causal Nature of Sentiment Analysis
Sentiment analysis (SA) aims to identify the sentiment expressed in a text, such as a product review.
ArSen-20: A New Benchmark for Arabic Sentiment Detection
Sentiment detection remains a pivotal task in natural language processing, yet its development in Arabic lags due to a scarcity of training materials compared to English.
EcoVerse: An Annotated Twitter Dataset for Eco-Relevance Classification, Environmental Impact Analysis, and Stance Detection
Anthropogenic ecological crisis constitutes a significant challenge that all within the academy must urgently face, including the Natural Language Processing (NLP) community.
Deciphering Political Entity Sentiment in News with Large Language Models: Zero-Shot and Few-Shot Strategies
Employing a chain-of-thought (COT) approach augmented with rationale in few-shot in-context learning, we assess whether this method enhances sentiment prediction accuracy.
SentiCSE: A Sentiment-aware Contrastive Sentence Embedding Framework with Sentiment-guided Textual Similarity
However, they neglect to evaluate the quality of their constructed sentiment representations; they just focus on improving the fine-tuning performance, which overshadows the representation quality.
KazSAnDRA: Kazakh Sentiment Analysis Dataset of Reviews and Attitudes
This paper presents KazSAnDRA, a dataset developed for Kazakh sentiment analysis that is the first and largest publicly available dataset of its kind.
LlamBERT: Large-scale low-cost data annotation in NLP
Large Language Models (LLMs), such as GPT-4 and Llama 2, show remarkable proficiency in a wide range of natural language processing (NLP) tasks.