This paper introduces the "Shopping Queries Dataset", a large dataset of difficult Amazon search queries and results, released publicly to foster research on improving the quality of search results.
Our best metric for domainness correlates strongly with human-judged precision, making it a reasonable automatic alternative for assessing the quality of domain-specific corpora.
In the context of investigative journalism, we address the problem of automatically identifying which claims in a given document are most worthy of fact-checking and should be prioritized.
We explore the applicability of machine translation evaluation (MTE) methods to a very different problem: answer ranking in community Question Answering.
This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic.
We describe SemEval-2017 Task 3 on Community Question Answering.
Community Question Answering (cQA) opens interesting new research directions for the traditional Question Answering (QA) field, e.g., exploiting the interaction between users and the structure of related posts.
Community question answering, a recent evolution of question answering in the Web context, allows a user to quickly consult the opinion of a number of people on a particular topic, thus taking advantage of the wisdom of the crowd.
We present a system for answering questions based on the full text of books (BookQA), which first selects book passages relevant to the question at hand, and then uses a memory network to reason over them and predict an answer.
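The first stage of the BookQA pipeline described above can be sketched minimally as follows. This is an illustrative stand-in that ranks passages by lexical overlap with the question; it is not the paper's actual retrieval component, the function names are assumptions, and the memory-network answering stage is omitted.

```python
def select_passages(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the question; return the top k.

    Illustrative sketch only: real BookQA retrieval would use a learned or
    statistical ranker, not raw word overlap.
    """
    q_words = set(question.lower().split())
    # Sort passages by how many question words they share (descending).
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The selected passages would then be fed to the answer-prediction model in the second stage.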
We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates.
We study the problem of automatic fact-checking, paying special attention to the impact of contextual and discourse information.
In this paper, we investigate the representations learned at different layers of NMT encoders.
We present a framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a pair of hypotheses, given the reference translation.
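The pairwise setting described above can be sketched minimally: score each hypothesis against the reference and keep the higher-scoring one. The scoring function below is a simple token-overlap F1 stand-in, not the paper's neural network, and all names are illustrative assumptions.

```python
def overlap_f1(hypothesis: str, reference: str) -> float:
    """Token-overlap F1 between a hypothesis and the reference (a toy proxy
    for a learned translation-quality score)."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    common = len(set(hyp) & set(ref))
    if common == 0:
        return 0.0
    precision = common / len(hyp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def pick_better(hyp_a: str, hyp_b: str, reference: str) -> str:
    """Pairwise decision rule: return the hypothesis scoring higher
    against the reference."""
    if overlap_f1(hyp_a, reference) >= overlap_f1(hyp_b, reference):
        return hyp_a
    return hyp_b
```

In the actual framework, the hand-crafted score would be replaced by a neural network trained to compare hypothesis pairs given the reference.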
In this article, we explore the potential of using sentence-level discourse structure for machine translation evaluation.
We address the problem of cross-language adaptation for question-question similarity reranking in community question answering, with the objective of porting a system trained on one input language to another, given labeled training data for the first language and only unlabeled data for the second.