Subjectivity Analysis
18 papers with code • 2 benchmarks • 2 datasets
A task related to sentiment analysis is subjectivity analysis, whose goal is to label an opinion as either subjective or objective.
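As a minimal illustration of the task, the sketch below labels a sentence as subjective or objective by counting opinion-bearing clue words. The lexicon and threshold here are hypothetical placeholders, not from any of the papers listed; real systems learn this decision from annotated data.

```python
# Hypothetical clue lexicon (illustrative only; real lexicons have thousands of entries)
SUBJECTIVE_CLUES = {"amazing", "terrible", "love", "hate", "beautiful", "awful", "think", "feel"}

def label_subjectivity(sentence, threshold=1):
    """Label a sentence 'subjective' if it contains enough opinion clue words."""
    tokens = sentence.lower().split()
    hits = sum(t.strip(".,!?") in SUBJECTIVE_CLUES for t in tokens)
    return "subjective" if hits >= threshold else "objective"
```

A lexicon baseline like this is only a starting point; the papers below replace it with learned sentence representations.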
Most implemented papers
Universal Sentence Encoder
For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
We present EDA: easy data augmentation techniques for boosting performance on text classification tasks.
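Two of the four EDA operations, random swap and random deletion, need no external resources and can be sketched in a few lines (the other two, synonym replacement and random insertion, require a thesaurus such as WordNet). The function names and defaults below are illustrative, not the paper's reference implementation.

```python
import random

def random_swap(words, n=1):
    """EDA random swap: exchange the positions of two random words, n times."""
    words = words[:]  # copy so the input list is untouched
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """EDA random deletion: drop each word with probability p, keeping at least one."""
    if len(words) <= 1:
        return words[:]
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]
```

Applied to each training sentence a few times, these perturbations yield augmented copies that the paper reports are especially helpful on small datasets.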
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos
This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinion-level Sentiment Intensity dataset (MOSI).
All-but-the-Top: Simple and Effective Postprocessing for Word Representations
The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textual similarity and text classification) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones.
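The postprocessing itself is simple: subtract the mean embedding, then remove the projections onto the top-d principal directions. A sketch of that procedure, assuming embeddings arrive as a NumPy matrix with one word vector per row (parameter names are mine, not the paper's):

```python
import numpy as np

def all_but_the_top(emb, d=2):
    """Remove the mean and the top-d principal components from word embeddings.

    emb: (num_words, dim) matrix, one embedding per row.
    d:   number of dominant directions to discard (the paper suggests ~dim/100).
    """
    mu = emb.mean(axis=0)
    X = emb - mu                                   # center the embeddings
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:d]                                   # top-d principal directions (rows)
    return X - X @ top.T @ top                     # subtract projections onto them
```

Because the top directions carry mostly frequency-related variance, stripping them tends to make the remaining dimensions more isotropic, which is the effect the paper credits for the downstream gains.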
Investigating Capsule Networks with Dynamic Routing for Text Classification
In this study, we explore capsule networks with dynamic routing for text classification.
The Evolution of Sentiment Analysis - A Review of Research Topics, Venues, and Top Cited Papers
Sentiment analysis is one of the fastest growing research areas in computer science, making it challenging to keep track of all the activities in the area.
Learning to Generate Reviews and Discovering Sentiment
We explore the properties of byte-level recurrent language models.
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
In this paper, we propose the Gated Multimodal Embedding LSTM with Temporal Attention (GME-LSTM(A)) model, which is composed of two modules.
Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations.
Entailment as Few-Shot Learner
Large pre-trained language models (LMs) have demonstrated remarkable ability as few-shot learners.