Sentiment Analysis

1293 papers with code • 39 benchmarks • 93 datasets

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a tweet can be categorized as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment.

Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
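The lexicon-based approach mentioned above can be sketched in a few lines: count how many words of a text appear in positive and negative word lists and compare the totals. The word lists below are toy examples, not a real sentiment lexicon.

```python
# Minimal lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}


def lexicon_sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


print(lexicon_sentiment("I love this great movie"))    # -> "positive"
print(lexicon_sentiment("what an awful, terrible day"))  # -> "negative"
```

Real lexicon-based systems use curated resources with thousands of scored words and handle negation and intensifiers; this sketch only shows the core counting idea.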

More recently, pre-trained deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics like precision, recall, and F1. Benchmark datasets such as SST, GLUE, and the IMDb movie reviews corpus are used to evaluate sentiment analysis systems.
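The evaluation metrics named above are simple functions of true-positive, false-positive, and false-negative counts. A minimal sketch, treating one label (here "positive") as the positive class:

```python
def precision_recall_f1(y_true, y_pred, positive="positive"):
    """Compute precision, recall, and F1 for one class from label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For multi-class sentiment (positive/negative/neutral), these per-class scores are typically combined by macro- or micro-averaging.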


Most implemented papers

EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks

jasonwei20/eda_nlp IJCNLP 2019

We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.
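Two of the four EDA operations, random swap and random deletion, can be sketched directly; the parameter names and defaults here are illustrative, not the paper's exact configuration.

```python
import random


def random_deletion(words, p=0.1, rng=random):
    """EDA-style random deletion: drop each word with probability p."""
    if len(words) == 1:
        return words
    kept = [w for w in words if rng.random() > p]
    return kept or [rng.choice(words)]  # never return an empty sentence


def random_swap(words, n=1, rng=random):
    """EDA-style random swap: swap two random positions, n times."""
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words
```

The other two EDA operations, synonym replacement and random insertion, additionally require a synonym resource such as WordNet.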

ERNIE: Enhanced Representation through Knowledge Integration

PaddlePaddle/PaddleNLP 19 Apr 2019

We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration).

FNet: Mixing Tokens with Fourier Transforms

google-research/google-research NAACL 2022

At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs).
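FNet's core idea is to replace self-attention with an unparameterized Fourier transform that mixes information along both the sequence and hidden dimensions, keeping only the real part. A minimal NumPy sketch of that mixing step (not the full FNet block, which also includes feed-forward and normalization layers):

```python
import numpy as np


def fourier_mixing(x):
    """FNet-style token mixing: real part of a 2D DFT applied over the
    sequence and hidden dimensions of x, shape (seq_len, d_model)."""
    return np.fft.fft2(x).real
```

Because the DFT has no learned parameters and fast FFT implementations exist, this mixing step is what makes FNet cheaper than attention at long sequence lengths.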

Ask Me Anything: Dynamic Memory Networks for Natural Language Processing

DongjunLee/dmn-tensorflow 24 Jun 2015

Most tasks in natural language processing can be cast into question answering (QA) problems over language input.

A C-LSTM Neural Network for Text Classification

zackhy/TextClassification 27 Nov 2015

In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification.

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

PaddlePaddle/PaddleNLP 31 Aug 2019

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

Graph Convolutional Networks for Text Classification

yao8839836/text_gcn 15 Sep 2018

We build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus.

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Quasi-Recurrent Neural Networks

salesforce/pytorch-qrnn 5 Nov 2016

Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences.

Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence

HSLCY/ABSA-BERT-pair NAACL 2019

Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA).
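The paper's approach converts ABSA into a sentence-pair classification task by pairing the review with an auxiliary sentence built from the aspect. The template below is an illustrative sketch in the spirit of the paper's question-style auxiliary sentences, not its exact wording.

```python
def make_auxiliary_sentence(aspect: str, style: str = "question") -> str:
    """Build an auxiliary sentence for an aspect so that (review, auxiliary)
    can be fed to a BERT-style sentence-pair classifier. Templates are
    illustrative, not the paper's exact ones."""
    if style == "question":
        return f"what do you think of the {aspect}?"
    return f"the polarity of the aspect {aspect}"


review = "The food was amazing but the service was slow."
pair = (review, make_auxiliary_sentence("service"))
print(pair[1])  # -> "what do you think of the service?"
```

The sentence-pair classifier then predicts the polarity for each aspect independently, so one review yields one pair per aspect of interest.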