UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models

We describe the UBC-NLP contribution to IEST-2018, focused on learning implicit emotion in Twitter data. Among the 30 participating teams, our system ranked 4th (with 69.3% F-score). Post-competition, we were able to score slightly higher than the 3rd-ranking system (reaching 70.7%). Our system is trained on top of a pre-trained language model (LM), fine-tuned on the data provided by the task organizers. Our best results are obtained by averaging an ensemble of language models. We also offer an analysis of system performance and the impact of training data size on the task. For example, we show that training our best model for only one epoch with less than 40% of the data yields better performance than the baseline reported by Klinger et al. (2018) for the task.
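As a rough illustration of the ensemble-averaging step described above (this is not the authors' released code; the model interface, the helper name ensemble_predict, and the six-class emotion label set are assumptions made for the sketch), the final prediction for a tweet can be obtained by averaging the per-class probabilities produced by each fine-tuned LM classifier and taking the argmax:

import numpy as np

# Assumed label set for the implicit-emotion task; each fine-tuned LM
# classifier is assumed to return one softmax distribution over these classes.
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def ensemble_predict(per_model_probs):
    """Average class probabilities across ensemble members and pick the argmax.

    per_model_probs: array-like of shape (n_models, n_classes), one softmax
    distribution per fine-tuned language model.
    """
    probs = np.asarray(per_model_probs)
    mean_probs = probs.mean(axis=0)  # simple, unweighted average over models
    return EMOTIONS[int(mean_probs.argmax())], mean_probs

# Toy example: three ensemble members scoring the same tweet.
outputs = [
    [0.10, 0.05, 0.15, 0.50, 0.10, 0.10],
    [0.05, 0.05, 0.20, 0.45, 0.15, 0.10],
    [0.15, 0.10, 0.30, 0.25, 0.10, 0.10],
]
label, avg = ensemble_predict(outputs)
print(label)  # -> "joy" for this toy input

An unweighted average is the simplest choice; weighting members by validation F-score is a common variant, though the paper's exact weighting scheme is not stated in the abstract.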
