We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short).
We compute the representations for all sentences in advance and use a k-d tree to accelerate retrieval.
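The idea of precomputing sentence representations and querying them with a k-d tree can be sketched as follows. This is a minimal illustration, assuming SciPy's `cKDTree` and randomly generated stand-in embeddings rather than the paper's actual encoder:

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in for precomputed sentence embeddings (hypothetical: in practice
# these would come from the trained sentence encoder, computed offline).
rng = np.random.default_rng(0)
sentence_vectors = rng.normal(size=(1000, 64)).astype(np.float64)

# Build the k-d tree once over all sentence representations.
tree = cKDTree(sentence_vectors)

# At query time, nearest-neighbor search avoids a full linear scan.
query = rng.normal(size=64).astype(np.float64)
distances, indices = tree.query(query, k=5)  # 5 closest sentences
```

Building the tree is a one-time cost; each subsequent query is then much cheaper than comparing the query against every stored sentence.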
To tackle this issue, we propose a multi-passage BERT model that globally normalizes answer scores across all passages of the same question, enabling our QA model to find better answers by utilizing more passages.
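The contrast between per-passage and global normalization can be shown with a small numerical sketch. The span scores below are made up for illustration, not taken from the paper:

```python
import numpy as np

def softmax(x):
    x = x - x.max()  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum()

# Hypothetical raw answer-span scores from three passages for one question.
passage_scores = [np.array([2.0, 0.5]),
                  np.array([1.0, 0.9]),
                  np.array([3.0, 0.1])]

# Per-passage normalization: each passage's probabilities sum to 1, so
# scores from different passages are not directly comparable.
per_passage = [softmax(s) for s in passage_scores]

# Global normalization: one softmax over ALL candidate spans across all
# passages, so every span competes on the same scale.
all_scores = np.concatenate(passage_scores)
global_probs = softmax(all_scores)
best = int(np.argmax(global_probs))  # best span index across all passages
```

With per-passage normalization, a weak passage can still produce a deceptively high probability for its own best span; global normalization removes that artifact.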
We propose a novel approach for complex KGQA that uses unsupervised message passing, which propagates confidence scores obtained by parsing an input question and matching terms in the knowledge graph to a set of possible answers.
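A toy version of propagating confidence scores over a graph toward candidate answers might look like the following. The graph, the matched entity, and the number of propagation rounds are all illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

# Toy knowledge graph over 5 entities, given as an adjacency matrix.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

# Initial confidence: the entity matched to a question term gets score 1
# (hypothetical seed; parsing/matching is outside this sketch).
scores = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

# Row-normalize so each entity spreads its confidence over its neighbors.
W = A / A.sum(axis=1, keepdims=True)

# A few rounds of message passing: neighbors accumulate confidence.
for _ in range(3):
    scores = W.T @ scores

answer = int(np.argmax(scores))  # most confident candidate answer
```

No training signal is involved: the answer emerges purely from graph structure and the initial term matches, which is what makes the propagation unsupervised.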
In this paper, we develop a pre-training approach for incorporating commonsense knowledge into language representation models.
Clinical text structuring is a critical and fundamental task for clinical research.
Text adventure games, in which players must make sense of the world through text descriptions and declare actions through text commands, provide a stepping stone toward grounding action in language.