Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real-world entities and are often unable to remember facts about those entities.
The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar.
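To make the contrast concrete, here is a minimal sketch (not any particular paper's model) of what grammar-based hierarchical generation looks like: a toy SQL grammar is expanded top-down, so every output is syntactically well-formed by construction, whereas free token-level decoding offers no such guarantee. The grammar, productions, and function names below are invented for illustration.

```python
import random

# Toy grammar for a tiny SQL fragment. Keys are non-terminals; each
# maps to a list of alternative productions (sequences of symbols).
GRAMMAR = {
    "query":  [["SELECT", "column", "FROM", "table", "where"]],
    "column": [["name"], ["age"]],
    "table":  [["users"], ["orders"]],
    "where":  [[], ["WHERE", "column", "=", "value"]],
    "value":  [["42"]],
}

def expand(symbol, rng):
    """Recursively expand a non-terminal by picking one production;
    symbols absent from GRAMMAR are terminals and emitted as-is."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    tokens = []
    for sym in production:
        tokens.extend(expand(sym, rng))
    return tokens

print(" ".join(expand("query", random.Random(0))))
```

Because decoding follows the grammar, every sample begins with `SELECT` and contains `FROM`; a token-level decoder would have to learn those constraints implicitly.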
Despite recent advances in natural language processing, many statistical models for processing text perform extremely poorly under domain shift.
Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks.
Ontology alignment is the task of identifying semantically equivalent entities from two given ontologies.
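As a minimal illustration of the task definition, the sketch below aligns two invented ontologies by surface-string similarity alone. Real alignment systems also exploit structure and logical axioms; the entity labels, threshold, and `align` helper here are all hypothetical.

```python
from difflib import SequenceMatcher

# Two toy ontologies, reduced to bare entity labels (invented data).
onto_a = ["Author", "Publication", "Conference"]
onto_b = ["Writer", "Paper", "Conference", "Journal"]

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(src, tgt, threshold=0.8):
    """Greedily pair each source entity with its most similar target
    entity, keeping only pairs above the similarity threshold."""
    pairs = []
    for a in src:
        best = max(tgt, key=lambda b: similarity(a, b))
        score = similarity(a, best)
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs

print(align(onto_a, onto_b))
```

With these labels only the exact match ("Conference", "Conference") clears the threshold, which is exactly why practical systems must look beyond labels to semantic equivalence.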
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
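In the ELMo paper these representations are built from the internal states of a deep bidirectional language model, combined per token as a softmax-normalized, learned weighted sum of the layer activations. The sketch below shows only that combination step with random stand-in activations; the layer count, dimensions, and variable names are assumptions, not the paper's actual values.

```python
import numpy as np

# Random stand-ins for biLM layer activations: (layers, tokens, dim).
rng = np.random.default_rng(0)
num_layers, seq_len, dim = 3, 5, 8
layers = rng.normal(size=(num_layers, seq_len, dim))

s = np.zeros(num_layers)   # learnable scalar weight per layer
gamma = 1.0                # learnable task-specific scale

# Softmax over the layer weights, then a weighted sum over layers,
# yielding one contextual vector per token: shape (seq_len, dim).
weights = np.exp(s) / np.exp(s).sum()
elmo = gamma * np.einsum("l,lsd->sd", weights, layers)
print(elmo.shape)  # (5, 8)
```

With the weights initialized to zero the softmax is uniform, so the result is just the mean over layers; training adjusts `s` and `gamma` so each downstream task can emphasize the layers it finds most useful.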
Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading.
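A toy example makes the need for multi-hop inference concrete: no single stored fact answers the query, but chaining two facts does. The fact base, relations, and helper names below are invented for illustration.

```python
# Tiny fact base keyed by (entity, relation) pairs (invented data).
FACTS = {
    ("copper", "is_a"): "metal",
    ("metal", "conducts"): "electricity",
}

def one_hop(entity, relation):
    """Answer a query with a single fact lookup, or None."""
    return FACTS.get((entity, relation))

def two_hop(entity, rel1, rel2):
    """Chain two lookups: ask rel2 about whatever rel1 maps to."""
    mid = one_hop(entity, rel1)
    return one_hop(mid, rel2) if mid else None

print(one_hop("copper", "conducts"))          # None: no direct fact
print(two_hop("copper", "is_a", "conducts"))  # electricity
```

The query "what does copper conduct?" is unanswerable in one hop, yet follows immediately once the two facts are composed; entailment and reading tasks routinely require exactly this kind of composition.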