To train the model efficiently on noisy data, we propose a self-adversarial learning method and a cascade training method.
With the prevalence of deep learning based embedding approaches, recommender systems have become a proven and indispensable tool in various information filtering applications.
Recently, there has been an effort to extend fine-grained entity typing by using a richer, ultra-fine set of types and by labeling noun phrases, including pronouns and nominal nouns, instead of just named entity mentions.
In this work, we show that this paradigm might inherit the adversarial vulnerability of the centralized neural network, i.e., its performance deteriorates on adversarial examples when the model is deployed.
A natural language interface (NLI) to databases is an interface that translates a natural language question into a structured query that is executable by a database management system (DBMS).
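To make the translation step concrete, here is a minimal sketch of such an interface: a toy pattern-based translator (not any paper's actual system; the question pattern, table, and `city` column are hypothetical) that maps one narrow class of questions to SQL.

```python
# Toy natural language interface: map a question to an executable SQL query.
# The pattern and the assumed schema (a table with a 'city' column) are
# illustrative assumptions, not taken from the paper.
import re

def question_to_sql(question: str) -> str:
    """Translate questions of the form 'How many <X> are in <Y>?' into SQL."""
    m = re.match(r"how many (\w+) are in (\w+)\??", question.lower())
    if m:
        table, place = m.groups()
        return f"SELECT COUNT(*) FROM {table} WHERE city = '{place}'"
    raise ValueError("unsupported question form")

print(question_to_sql("How many restaurants are in Berlin?"))
# -> SELECT COUNT(*) FROM restaurants WHERE city = 'berlin'
```

Real NLIs replace the hand-written pattern with a learned semantic parser, but the input/output contract is the same: natural language in, a DBMS-executable query out.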
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.
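The composition of binary factoid questions can be sketched as a chain of lookups. The following toy example (a hypothetical two-triple knowledge base, not KBQA's actual pipeline) shows how a complex question decomposes into a series of binary factoid questions.

```python
# Toy knowledge base: (entity, predicate) -> value. The facts and predicate
# names here are illustrative assumptions.
KB = {
    ("barack obama", "spouse"): "michelle obama",
    ("michelle obama", "birth_date"): "1964-01-17",
}

def answer_bfq(entity: str, predicate: str) -> str:
    """Answer a single binary factoid question."""
    return KB[(entity, predicate)]

def answer_complex(entity: str, predicates: list[str]) -> str:
    """Answer a complex question as a chain of binary factoid questions."""
    for predicate in predicates:
        entity = answer_bfq(entity, predicate)
    return entity

# "When was Barack Obama's wife born?" -> spouse, then birth_date.
print(answer_complex("barack obama", ["spouse", "birth_date"]))
# -> 1964-01-17
```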
In this work, we introduce a general-purpose, transfer-learnable NLI with the goal of learning one model that can be used as an NLI for any relational database.
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single sense of the verb.
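The idea can be illustrated with a toy lexicon (the patterns, object classes, and sense labels below are hypothetical, not the paper's inventory): the same verb maps to different senses depending on which pattern its argument matches, and each pattern pins down exactly one sense.

```python
# Hypothetical object classes and verb patterns for illustration only.
OBJECT_CLASS = {"guitar": "instrument", "piano": "instrument",
                "soccer": "sport", "chess": "game"}

# Each (verb, object-class) pattern corresponds to a single verb sense.
VERB_PATTERNS = {
    ("play", "instrument"): "perform_music",
    ("play", "sport"): "engage_in_sport",
    ("play", "game"): "engage_in_game",
}

def verb_sense(verb: str, obj: str) -> str:
    """Disambiguate a verb by matching its object against verb patterns."""
    return VERB_PATTERNS[(verb, OBJECT_CLASS[obj])]

print(verb_sense("play", "guitar"))  # -> perform_music
print(verb_sense("play", "soccer"))  # -> engage_in_sport
```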
Recognizing metaphors and identifying their source-target mappings are important tasks, as metaphorical text poses a significant challenge for machine reading.