To facilitate further research into KB integration methods, we create a subset of the NQ data, Factual Questions (FQ), in which each question has evidence in the KB, in the form of paths linking question entities to answer entities, but must still be answered using text.
The rise of social media has made it central to both news dissemination and consumption.
Keyphrase extraction aims to automatically extract a list of "important" phrases that represent the key concepts in a document.
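The task can be illustrated with a minimal frequency-based sketch: candidate phrases are contiguous content-word n-grams, ranked by how often they recur. Real extractors use richer signals (POS patterns, graph ranking, neural scoring); the stopword list and example document below are small hypothetical samples.

```python
import re
from collections import Counter

# Hypothetical toy stopword list; real systems use a full list.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "for"}

def candidate_phrases(text, max_len=3):
    """Yield content-word n-grams (up to max_len words) that do not cross a stopword."""
    tokens = re.findall(r"[a-z]+", text.lower())
    runs, run = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if run:
                runs.append(run)
            run = []
        else:
            run.append(tok)
    if run:
        runs.append(run)
    for run in runs:
        for i in range(len(run)):
            for j in range(i + 1, min(i + max_len, len(run)) + 1):
                yield " ".join(run[i:j])

def top_keyphrases(text, k=3):
    """Rank candidate phrases by raw frequency."""
    return [p for p, _ in Counter(candidate_phrases(text)).most_common(k)]

doc = ("keyphrase extraction finds key concepts in a document "
       "and keyphrase extraction helps indexing")
```

Here `top_keyphrases(doc)` surfaces "keyphrase extraction" because it recurs as a contiguous n-gram, which is the core intuition behind frequency-based ranking.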
To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential.
Modern summarization models generate highly fluent but often factually unreliable outputs.
Dense retrieval has been shown to be effective for retrieving relevant documents for Open Domain QA, surpassing popular sparse retrieval methods like BM25.
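The contrast between the two scoring paradigms can be sketched on a toy corpus: BM25 matches query terms exactly against document terms, while dense retrieval scores by the inner product of query and passage vectors. The corpus, query, and `encode` function below are illustrative stand-ins; a real dense retriever uses a learned neural encoder, not the bag-of-tokens hashing shown here.

```python
import math
from collections import Counter

# Toy corpus; a real system would index millions of passages.
docs = [
    "the capital of france is paris",
    "paris is known for the eiffel tower",
    "berlin is the capital of germany",
]
query = "what city is the french capital"

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Sparse scoring: standard BM25 with default k1/b parameters."""
    tokenized = [d.split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

def encode(text, dim=64):
    """Stand-in for a learned encoder: L2-normalized bag-of-tokens hash vector."""
    vec = [0.0] * dim
    for tok in text.split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def dense_scores(query, docs):
    """Dense scoring: inner product between query and passage vectors."""
    q = encode(query)
    return [sum(a * b for a, b in zip(q, encode(d))) for d in docs]
```

With learned encoders, the dense inner product can match "french capital" to "capital of france" even without exact term overlap, which is where dense retrieval gains over BM25.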
We introduce SelfExplain, a novel self-explaining model that explains a text classifier's predictions using phrase-based concepts.
To this end, we propose incorporating latent and explicit dependencies across sentences in the source document into end-to-end single-document summarization models.
In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.
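The idea of softly following a relation path can be sketched as follows: represent the current entity set as a weight vector and apply one "hop" by multiplying it with an entity-to-entity transition matrix derived from textual co-occurrence. The entities and matrix below are hypothetical toy data; DrKIT builds this hop operator from learned mention encodings and efficient sparse indexes over a corpus.

```python
# Toy entity inventory (hypothetical).
entities = ["Paris", "France", "Europe", "Berlin"]

# transition[i][j]: soft evidence that text links entity i to entity j,
# e.g. via sentences mentioning both (hand-set values for illustration).
transition = [
    [0.0, 0.9, 0.1, 0.0],  # Paris  -> France (strong), Europe (weak)
    [0.0, 0.0, 1.0, 0.0],  # France -> Europe
    [0.0, 0.0, 0.0, 0.0],  # Europe -> (nothing in this toy corpus)
    [0.0, 0.0, 0.9, 0.0],  # Berlin -> Europe
]

def hop(weights, transition):
    """One soft hop: new_w[j] = sum_i weights[i] * transition[i][j]."""
    n = len(transition[0])
    return [sum(w * row[j] for w, row in zip(weights, transition))
            for j in range(n)]

# Start from "Paris" with weight 1 and softly follow a two-hop path.
start = [1.0, 0.0, 0.0, 0.0]
after_one = hop(start, transition)      # mass flows to France (and weakly Europe)
after_two = hop(after_one, transition)  # France's mass flows on to Europe
```

Because each hop is a (sparse) matrix-vector product, multi-hop reasoning over text becomes differentiable and composable, which is the property such a module exploits.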