Pretraining Sentiment Classifiers with Unlabeled Dialog Data

ACL 2018 · Toru Shimizu, Nobuyuki Shimizu, Hayato Kobayashi

The huge cost of creating labeled training data is a common problem for supervised learning tasks such as sentiment classification. Recent studies showed that pretraining with unlabeled data via a language model can improve the performance of classification models...
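The two-stage idea the abstract describes — learn representations from unlabeled text first, then train a sentiment classifier on a small labeled set — can be illustrated with a minimal, self-contained sketch. The toy corpus, count-based co-occurrence "pretraining" (a crude stand-in for the paper's neural language model), and logistic-regression fine-tuning below are all hypothetical, not the authors' implementation:

```python
import math
from collections import defaultdict

# Stage 1: "pretrain" on unlabeled text (hypothetical toy corpus).
# Co-occurrence counts stand in for a language model; the point is only
# that word representations are learned from unlabeled data.
unlabeled = [
    "that was really good",
    "that was really great",
    "that was really awesome",
    "that was so bad",
    "that was so terrible",
    "that was so awful",
]

window = 1
vocab = sorted({w for s in unlabeled for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

cooc = defaultdict(lambda: [0.0] * len(vocab))
for s in unlabeled:
    toks = s.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if j != i:
                cooc[w][idx[toks[j]]] += 1.0

def embed_sentence(sentence):
    # Average the pretrained word vectors (unseen words map to zeros).
    toks = sentence.split()
    vec = [0.0] * len(vocab)
    for t in toks:
        for k, v in enumerate(cooc.get(t, [0.0] * len(vocab))):
            vec[k] += v
    n = max(len(toks), 1)
    return [v / n for v in vec]

# Stage 2: supervised fine-tuning on a tiny labeled set.
labeled = [("good", 1), ("bad", 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w = [0.0] * len(vocab)
b = 0.0
lr = 0.5
for _ in range(200):
    for text, y in labeled:
        x = embed_sentence(text)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(text):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, embed_sentence(text))) + b)
    return 1 if p >= 0.5 else 0
```

Because "great" shares unlabeled-corpus contexts with "good" (and "terrible" with "bad"), the classifier generalizes to words it never saw a label for — the transfer effect the abstract attributes to language-model pretraining, here in a deliberately simplified form.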



No code implementations yet.
