Event Causality Recognition Exploiting Multiple Annotators' Judgments and Background Knowledge

We propose new BERT-based methods for recognizing event causality such as "smoke cigarettes" → "die of lung cancer" written in web texts. In our methods, we grasp each annotator's policy by training multiple classifiers, each of which predicts the labels given by a single annotator, and combine the resulting classifiers' outputs to predict the final labels determined by majority vote. Furthermore, we investigate the effect of supplying background knowledge to our classifiers. Since BERT models are pre-trained with a large corpus, some sort of background knowledge for event causality may be learned during pre-training. Our experiments with a Japanese dataset suggest that this is actually the case: performance improved when we pre-trained the BERT models with web texts containing a large number of event causalities instead of Wikipedia articles or randomly sampled web texts. However, this effect was limited. Therefore, we further improved performance by simply adding texts related to an input causality candidate as background knowledge to the input of the BERT models. We believe these findings indicate a promising future research direction.
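To make the two mechanisms in the abstract concrete, here is a minimal sketch of a shared BERT encoder with one classification head per annotator, where background text is supplied simply as a second input segment and the heads' predictions are combined by majority vote. The base model name, the hyperparameters, and the combination rule are illustrative assumptions (the paper uses a Japanese BERT pre-trained on causality-rich web text), not the authors' exact implementation.

```python
# Hypothetical sketch: per-annotator classifiers over a shared BERT encoder,
# combined to approximate the majority-vote gold label.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; the paper pre-trains a Japanese BERT on web texts
# containing many event causalities.
MODEL_NAME = "bert-base-uncased"

class PerAnnotatorCausalityClassifier(nn.Module):
    def __init__(self, num_annotators: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        hidden = self.encoder.config.hidden_size
        # One binary head per annotator, each trained on that
        # annotator's own labels to capture their labeling policy.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, 2) for _ in range(num_annotators)
        )

    def forward(self, input_ids, attention_mask):
        # [CLS] representation of the causality candidate (+ background text).
        cls = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        # Shape: (batch, num_annotators, 2)
        return torch.stack([head(cls) for head in self.heads], dim=1)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = PerAnnotatorCausalityClassifier(num_annotators=3)

candidate = "smoke cigarettes -> die of lung cancer"
background = "Smoking is a major cause of lung cancer."  # retrieved related text
# Background knowledge is appended as the second segment of the BERT input.
inputs = tokenizer(candidate, background, return_tensors="pt",
                   truncation=True, max_length=128)

with torch.no_grad():
    logits = model(inputs["input_ids"], inputs["attention_mask"])
per_annotator = logits.argmax(dim=-1)                         # each head's label
majority = per_annotator.float().mean(dim=1).round().long()   # majority vote
```

Whether the classifiers' outputs are combined by voting on hard labels, as here, or by averaging logits is a design choice; the abstract only states that the outputs are combined to predict the majority-vote labels.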
