Recent advances in open-domain question answering (ODQA), i.e., finding answers in a large open-domain corpus such as Wikipedia, have led to human-level performance on many datasets.
Considerable progress has been made in question answering (QA) in recent years, but the particular problem of QA over narrative book stories has not been explored in depth.
While large-scale pretraining has achieved great success in many NLP tasks, it remains largely unstudied whether external linguistic knowledge can further improve data-driven models.
Further experiments show that our model has higher transferability and brings greater robustness improvements to victim models through adversarial training.
In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling SC through a confirmatory experiment.