Exploiting BERT for End-to-End Aspect-based Sentiment Analysis

WS 2019 · Xin Li, Lidong Bing, Wenxuan Zhang, Wai Lam

In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task. Specifically, we build a series of simple yet insightful neural baselines for E2E-ABSA. The experimental results show that, even with a simple linear classification layer, our BERT-based architecture can outperform state-of-the-art models. In addition, we standardize the comparative study by consistently using a hold-out validation set for model selection, a practice largely ignored in previous work. Our work can therefore serve as a BERT-based benchmark for E2E-ABSA.
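E2E-ABSA is commonly cast as a sequence-tagging problem: each token receives a unified tag that combines the aspect boundary with its sentiment (e.g. B-POS, I-NEG, O), and a light classification layer on top of the contextual encoder predicts these tags. As an illustrative sketch (the function name and tag strings here are assumptions, not the paper's code), decoding such a tag sequence into aspect–sentiment pairs might look like:

```python
def decode_unified_tags(tokens, tags):
    """Turn per-token unified tags (B-POS, I-NEG, O, ...) into
    (aspect phrase, sentiment) pairs. Illustrative sketch only."""
    pairs, span, senti = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new aspect starts; flush any span in progress.
            if span:
                pairs.append((" ".join(span), senti))
            span, senti = [tok], tag[2:]
        elif tag.startswith("I-") and span:
            # Continuation of the current aspect span.
            span.append(tok)
        else:
            # Outside tag (or stray I-) closes the current span.
            if span:
                pairs.append((" ".join(span), senti))
            span, senti = [], None
    if span:
        pairs.append((" ".join(span), senti))
    return pairs


tokens = ["The", "battery", "life", "is", "great", "but",
          "the", "screen", "flickers"]
tags = ["O", "B-POS", "I-POS", "O", "O", "O", "O", "B-NEG", "O"]
print(decode_unified_tags(tokens, tags))
# → [('battery life', 'POS'), ('screen', 'NEG')]
```

With this framing, the model only has to emit one tag per token; extraction and classification fall out of a single decoding pass.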


Datasets


Task                             Dataset                          Model          Metric  Value  Global Rank
Aspect-Based Sentiment Analysis  SemEval 2014 Task 4 Laptop       BERT-E2E-ABSA  F1      61.12  #3
Sentiment Analysis               SemEval 2014 Task 4 Subtask 1+2  BERT-E2E-ABSA  F1      61.12  #4
Aspect-Based Sentiment Analysis  SemEval 2014 Task 4 Subtask 1+2  BERT-E2E-ABSA  F1      61.12  #4
