The application of Natural Language Inference (NLI) methods over large textual corpora can facilitate scientific discovery, reducing the gap between current research and the available large-scale scientific knowledge.
While previous editions of this shared task aimed to evaluate explanatory completeness (finding a set of facts that forms a complete, gap-free inference chain from question to correct answer), the 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations.
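For context only, a common baseline for this kind of relevance determination (not the shared task's reference system) ranks candidate facts by lexical similarity to the question and answer text. The sketch below is a minimal illustration under that assumption; the fact list and query are toy placeholders, and scikit-learn's TfidfVectorizer stands in for whatever scoring model a participant might use.

```python
# Illustrative TF-IDF relevance baseline for ranking explanation facts.
# The facts and query are toy placeholders, not from any dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

facts = [
    "a plant requires sunlight to grow",
    "photosynthesis converts light energy into chemical energy",
    "a magnet attracts iron",
]
query = "Why do plants need light? Because photosynthesis stores light energy."

vectorizer = TfidfVectorizer()
fact_vectors = vectorizer.fit_transform(facts)
query_vector = vectorizer.transform([query])

# Rank every candidate fact by cosine similarity to the question + answer.
scores = cosine_similarity(query_vector, fact_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {facts[idx]}")
```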
Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference.
Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured by a functional order-theoretic property called monotonicity.
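As a hedged illustration (not drawn from the paper itself), monotonicity licenses entailment-preserving substitutions, e.g. replacing a term with a hypernym in an upward-monotone context; the hypernym table and sentences below are assumptions for illustration only.

```python
# Toy illustration of upward-monotone substitution: "some" is upward
# monotone in its restrictor, so "some roses are red" entails
# "some flowers are red" once "roses" is replaced by its hypernym.
HYPERNYMS = {"roses": "flowers", "dogs": "animals"}

def upward_substitution(sentence: str, term: str) -> str:
    """Replace `term` with its hypernym, modelling one upward-monotone step."""
    return sentence.replace(term, HYPERNYMS[term])

premise = "some roses are red"
hypothesis = upward_substitution(premise, "roses")
print(premise, "entails", hypothesis)
```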
This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization.
This paper explores the topic of transportability as a sub-area of generalisability.
Probing (or diagnostic classification) has become a popular strategy for investigating whether a given set of intermediate features is present in the representations of neural models.
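As an illustration only (not the paper's own setup), a probe in this sense is typically a small supervised classifier trained on frozen model representations. The sketch below assumes precomputed embeddings X and gold labels y for the intermediate feature of interest, and uses scikit-learn's LogisticRegression as the diagnostic classifier; the random data is a placeholder.

```python
# Minimal probing / diagnostic-classification sketch (illustrative only).
# X stands in for frozen model representations (n_samples x hidden_dim),
# y for the intermediate feature being tested (e.g. tense, part of speech).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))      # placeholder embeddings
y = rng.integers(0, 2, size=1000)     # placeholder binary feature labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

probe = LogisticRegression(max_iter=1000)  # deliberately simple probe
probe.fit(X_train, y_train)

# High probe accuracy is read as evidence that the feature is
# linearly decodable from the representations.
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```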
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).
Existing accounts of explanation emphasise the role of prior experience in the solution of new problems.
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA).
Recent advances in reading comprehension have resulted in models that surpass human performance when the answer is contained in a single, continuous passage of text.