Wikipedia presents a huge opportunity for machine learning, as it is the largest semi-structured knowledge base available.
Definition modeling comprises two tasks: acquiring word embeddings from dictionary definitions and generating definitions of words.
Wikipedia is a rich source of general world knowledge that can guide NLP models toward better-grounded predictions.
This paper provides an analytical assessment of students' short-answer responses, with a view to their potential benefits in pedagogical contexts.
Monotonicity reasoning is an important skill for any intelligent natural language inference (NLI) model, as it requires capturing the interaction between lexical and syntactic structures.
To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena.
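To make the idea concrete, the following toy sketch illustrates the direction of substitution that monotonicity licenses; the lexicon, function names, and example sentences are illustrative assumptions, not part of the HELP dataset.

```python
# Toy illustration of monotonicity reasoning (hypothetical lexicon).
# In an upward-entailing context, replacing a word with its hypernym
# preserves truth ("A dog barked" -> "An animal barked"); in a
# downward-entailing context (e.g. under "no"), the licensed direction
# is toward the hyponym ("No animal barked" -> "No dog barked").

HYPERNYM = {"dog": "animal"}  # toy lexicon: word -> hypernym

def entailed_sentence(words, polarity):
    """Substitute along the hypernymy relation in the licensed direction.

    polarity: "up" for upward-entailing contexts, "down" for
    downward-entailing ones (e.g. under negation).
    """
    if polarity == "up":
        # hyponym -> hypernym preserves truth
        return [HYPERNYM.get(w, w) for w in words]
    # under downward entailment, hypernym -> hyponym preserves truth
    hyponym = {v: k for k, v in HYPERNYM.items()}
    return [hyponym.get(w, w) for w in words]

print(entailed_sentence(["a", "dog", "barked"], "up"))     # upward case
print(entailed_sentence(["no", "animal", "barked"], "down"))  # downward case
```

A model that has learned monotonicity should accept exactly these substitution directions and reject the reverse ones.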
The current state-of-the-art (SOTA) model for FG-NER relies heavily on manual effort for building a dictionary and designing hand-crafted features.
From this study, we observed that (i) baseline performance on the hard subsets degrades markedly compared to performance on the full datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning more often than easy questions, and (iii) multiple-choice questions tend to require a broader range of reasoning skills than answer-extraction and description questions.
However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages.
For unanswered questions that lack a past resolved question with an identical need, we propose using the best answer to a past resolved question with similar needs.
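The fallback described above can be sketched as a nearest-question lookup. This is a minimal illustration, not the paper's method: the function names and data are hypothetical, and similarity here is plain token-level Jaccard overlap, whereas a real system would model the underlying need with learned representations.

```python
# Hypothetical sketch: when a new question has no past resolved question
# with the same need, fall back to the most similar past question and
# reuse its best answer.

def jaccard(a, b):
    """Token-level Jaccard similarity between two strings (toy measure)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def answer_for(new_question, resolved):
    """resolved: list of (past_question, best_answer) pairs."""
    _, best_answer = max(resolved, key=lambda qa: jaccard(new_question, qa[0]))
    return best_answer

resolved = [
    ("how do i reset my router password", "Hold the reset button for 10s."),
    ("best way to back up photos", "Use an external drive plus cloud sync."),
]
print(answer_for("forgot router admin password how to reset", resolved))
# prints "Hold the reset button for 10s."
```

The design choice is a pure retrieval fallback: no answer is generated, only transferred from the closest past question, so answer quality is bounded by how well similarity captures the shared need.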