In this paper, we collect conversations from TV series scripts and annotate each utterance with emotion and interpersonal relationship labels.
BERT, a neural network-based language model pre-trained on large corpora, is a breakthrough in natural language processing, significantly outperforming previous state-of-the-art models in numerous tasks.
Additionally, we construct a new dataset, TinyRel-CM: a few-shot relation classification dataset in the health domain with limited training data and challenging relation classes.
Our results offer recommendations to digital library stakeholders for selecting the appropriate technique to build a structured knowledge-based system that eases scholarly information organization.
Implicit discourse relations are more challenging than their explicit counterparts not only to classify but also to annotate.
We further train the neural lexical relation classifier by combining a meta-learning process over the auxiliary task distribution with supervised learning.
Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result', has focused on the relative frequencies of signal words within and outside text from each discourse relation.
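As a minimal sketch of the kind of frequency comparison described above, the snippet below contrasts how often a signal word occurs inside spans annotated with a given relation versus elsewhere; the function name, the toy spans, and the choice of 'however' as the marker are all illustrative assumptions, not the original study's method.

```python
def signal_relative_frequency(marker, relation_spans, other_spans):
    """Compare the relative frequency of a signal word inside spans of a
    discourse relation against its frequency in all other spans.
    (Hypothetical helper for illustration only.)"""
    def freq(spans):
        tokens = [tok.lower() for span in spans for tok in span.split()]
        # Guard against empty input to avoid division by zero.
        return tokens.count(marker) / max(len(tokens), 1)
    return freq(relation_spans), freq(other_spans)

# Toy, hand-made example spans (not real corpus data).
contrast_spans = ["however the results differ", "the baseline however fails"]
other_spans = ["we train a model on the data", "results are reported below"]

inside, outside = signal_relative_frequency("however", contrast_spans, other_spans)
# 'however' is frequent inside Contrast spans and absent elsewhere,
# so inside > outside, marking it as a candidate signal for that relation.
```

A marker whose in-relation frequency substantially exceeds its out-of-relation frequency is treated as a signal for that relation under this style of analysis.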
In this work, we propose a novel approach that predicts the relationships between entities in an image in a weakly supervised manner, relying on image captions and object bounding box annotations as the sole sources of supervision.