This paper studies the bias problem in multi-hop question answering models: answering correctly without performing correct reasoning.
Graph neural networks (GNNs) have been widely used in representation learning on graphs and achieved superior performance in tasks such as node classification.
Existing machine reading comprehension models are reported to be brittle to adversarially perturbed questions when optimized only for accuracy, which has led to the creation of new reading comprehension benchmarks, such as SQuAD 2.0, that contain such questions.
This paper studies the problem of supporting question answering in a new language with limited training resources.
Here we describe NL2pSQL, a new task of generating pSQL code from natural language questions over under-specified database issues.
This paper studies the problem of non-factoid question answering, where the answer may span over multiple sentences.
We find that novelty is not a singular concept and thus inherently lacks ground-truth annotations with cross-annotator agreement, which is a major obstacle in evaluating these models.
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions composed of a series of binary factoid questions.
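As a toy illustration of composing binary factoid questions (the knowledge base, entities, and function names below are hypothetical, not the paper's actual KBQA system):

```python
# Toy sketch: answering a complex question by chaining binary factoid
# questions over a tiny knowledge base. All data here is illustrative.
KB = {
    ("Barack Obama", "spouse"): "Michelle Obama",
    ("Michelle Obama", "birthplace"): "Chicago",
}

def answer_binary(entity, relation):
    """Answer one binary factoid question: (entity, relation) -> value."""
    return KB.get((entity, relation))

def answer_complex(entity, relations):
    """Compose binary questions: each answer becomes the next entity."""
    for rel in relations:
        entity = answer_binary(entity, rel)
        if entity is None:
            return None
    return entity

# "Where was Barack Obama's wife born?" decomposes into
# spouse(Barack Obama) -> birthplace(Michelle Obama)
print(answer_complex("Barack Obama", ["spouse", "birthplace"]))  # Chicago
```

The design point is that a complex question needs no dedicated template: it is answered by threading the result of one binary question into the next.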
The performance of text classification has improved tremendously using intelligently engineered neural-based models, especially those injecting categorical metadata as additional information, e.g., using user/product information for sentiment classification.
Ranked #2 on Sentiment Analysis on User and product information (Yelp 2013 (Acc) metric)
Question answering (QA), which extracts answers from text for a given natural language question, has been actively studied, and existing models have shown promise of outperforming human performance when trained and evaluated on the SQuAD dataset.
Thus, we aim to eliminate these requirements and solve the sense granularity problem by proposing AutoSense, a latent variable model based on two observations: (1) senses are represented as a distribution over topics, and (2) senses generate pairings between the target word and its neighboring words.
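Observation (1) can be sketched with a toy example: each induced sense is a probability distribution over topics, and the sense of a target occurrence is the one whose topic distribution best matches the topic profile of its neighboring words. The topics, numbers, and function below are illustrative assumptions, not learned AutoSense parameters.

```python
# Toy sketch: senses of "bank" as distributions over three topics.
topics = ["finance", "geography", "sports"]
sense_topic_dist = [
    [0.85, 0.10, 0.05],  # sense 0: money institution (mostly finance)
    [0.05, 0.90, 0.05],  # sense 1: river bank (mostly geography)
]

def most_likely_sense(neighbor_topic_probs):
    """Score each sense by the dot product of its topic distribution
    with the topic profile of the target word's neighbors."""
    scores = [sum(s * n for s, n in zip(row, neighbor_topic_probs))
              for row in sense_topic_dist]
    return scores.index(max(scores))

# Neighbors like "loan" and "deposit" skew toward the finance topic,
# so the finance sense wins.
print(most_likely_sense([0.7, 0.2, 0.1]))  # 0
```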
Ranked #2 on Word Sense Induction on SemEval 2010 WSI
The same question has not been asked in the table question answering (TableQA) task, where we are tasked to answer a query given a table.
To this end, we leverage an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary.
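A minimal sketch of the idea of turning linked entities into one topic vector, assuming simple mean pooling over toy entity embeddings (the actual E2T module is a learned neural component, and the vectors below are made up for illustration):

```python
# Toy sketch: pool embeddings of linked entities into a single "topic"
# vector that a seq2seq decoder could condition on. Embeddings are tiny
# illustrative vectors, not the module's learned parameters.
entity_embeddings = {
    "Barack Obama": [0.9, 0.1, 0.0],
    "White House":  [0.7, 0.2, 0.1],
    "Washington":   [0.6, 0.3, 0.1],
}

def entities_to_topic(entities):
    """Mean-pool the entity vectors into one topic representation."""
    vecs = [entity_embeddings[e] for e in entities]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

topic = entities_to_topic(["Barack Obama", "White House", "Washington"])
print([round(x, 3) for x in topic])
```

Mean pooling is the simplest choice; a learned module would instead weight entities, e.g., by entity-linking confidence or attention.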
Ranked #17 on Text Summarization on GigaWord
The use of user/product information in sentiment analysis is important, especially for cold-start users/products, who have very few reviews.
Ranked #2 on Sentiment Analysis on User and product information
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single meaning of the verb.
Besides providing relevant information, amusing users has been an important role of the web.
This paper provides a query processing method based on relevance models between entity sets and concepts.