no code implementations • Findings (ACL) 2022 • Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, Julia Rozanova, André Freitas
The application of Natural Language Inference (NLI) methods over large textual corpora can facilitate scientific discovery, reducing the gap between current research and the available large-scale scientific knowledge.
1 code implementation • NAACL (TextGraphs) 2021 • Peter Jansen, Mokanarangan Thayaparan, Marco Valentino, Dmitry Ustalov
While previous editions of this shared task aimed to evaluate explanatory completeness (finding a set of facts that form a complete inference chain, without gaps, from question to correct answer), the 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations.
1 code implementation • COLING (TextGraphs) 2022 • Marco Valentino, Deborah Ferreira, Mokanarangan Thayaparan, André Freitas, Dmitry Ustalov
In this summary paper, we present the results of the 1st edition of the Natural Language Premise Selection (NLPS) shared task, providing a description of the evaluation data and the participating systems.
no code implementations • 3 Apr 2024 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).
no code implementations • 5 Aug 2022 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language.
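To make the ILP framing concrete, the sketch below encodes a toy version of explainable fact selection as an integer program: binary variables mark which facts enter the explanation, the objective maximises relevance, and linear constraints impose explicit structure. The fact bank, scores, and constraints are illustrative assumptions (solved here with PuLP), not the papers' actual formulation.

```python
# Minimal ILP sketch of multi-hop fact selection. Facts, scores, and
# constraints are illustrative assumptions, not the papers' formulation.
import pulp

# Toy fact bank: (fact text, relevance score to the hypothesis).
facts = {
    "f1": ("a stick is a kind of object", 0.4),
    "f2": ("friction produces heat", 0.9),
    "f3": ("rubbing two objects together causes friction", 0.8),
    "f4": ("heat is a kind of energy", 0.3),
}

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
x = {k: pulp.LpVariable(k, cat="Binary") for k in facts}  # 1 = fact selected

# Objective: maximise the total relevance of the selected explanation.
prob += pulp.lpSum(facts[k][1] * x[k] for k in facts)

# Explicit, controllable structural constraints:
prob += pulp.lpSum(x.values()) <= 2   # budget: at most two facts
prob += x["f2"] <= x["f3"]            # select f2 only if its premise f3 is selected

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([facts[k][0] for k in facts if x[k].value() == 1])
```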
1 code implementation • 25 Jul 2021 • Marco Valentino, Mokanarangan Thayaparan, Deborah Ferreira, André Freitas
Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference.
no code implementations • ACL (NALOMA, IWCS) 2021 • Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas
Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured in a functional order-theoretic property called monotonicity.
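As a concrete illustration of the monotonicity property referenced above (the sentences and the hyponym/hypernym pair are illustrative, not drawn from the paper):

```python
# Toy illustration of monotonicity under substitution: in an upward-monotone
# context, replacing a concept with a hypernym preserves entailment; in a
# downward-monotone context, the direction is reversed.
context_up = "Some {} are barking."    # "some" creates an upward-monotone slot
context_down = "No {} are barking."    # "no" creates a downward-monotone slot

hyponym, hypernym = "dogs", "animals"

# Upward: "Some dogs are barking" entails "Some animals are barking".
print(context_up.format(hyponym), "=>", context_up.format(hypernym))
# Downward: "No animals are barking" entails "No dogs are barking".
print(context_down.format(hypernym), "=>", context_down.format(hyponym))
```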
no code implementations • 7 May 2021 • Mokanarangan Thayaparan, Marco Valentino, Deborah Ferreira, Julia Rozanova, André Freitas
This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization.
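The sketch below illustrates the general idea of a differentiable convex-optimization layer, using a relaxed fact-selection problem built with cvxpy, cvxpylayers, and PyTorch; it is a simplified stand-in under those assumptions, not Diff-Explainer's actual formulation.

```python
# Sketch of a differentiable convex-optimisation layer for soft fact
# selection: an assumed relaxation where binary choices become values in
# [0, 1], so gradients can flow from the solution back into neural scores.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 4                         # number of candidate facts (illustrative)
scores = cp.Parameter(n)      # neural relevance scores, fed in at runtime
x = cp.Variable(n)            # relaxed selection variables in [0, 1]

# A small quadratic term smooths the solution so gradients are informative.
objective = cp.Maximize(scores @ x - 0.5 * cp.sum_squares(x))
constraints = [x >= 0, x <= 1, cp.sum(x) <= 2]   # relaxed budget constraint
layer = CvxpyLayer(cp.Problem(objective, constraints),
                   parameters=[scores], variables=[x])

s = torch.tensor([0.4, 0.9, 0.8, 0.3], requires_grad=True)
(selection,) = layer(s)
selection.sum().backward()    # end-to-end differentiable
print(selection.detach(), s.grad)
```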
1 code implementation • IWCS (ACL) 2021 • Guy Marshall, Mokanarangan Thayaparan, Philip Osborne, André Freitas
This paper explores transportability as a sub-area of generalisability.
1 code implementation • ACL 2021 • Deborah Ferreira, Julia Rozanova, Mokanarangan Thayaparan, Marco Valentino, André Freitas
Probing (or diagnostic classification) has become a popular strategy for investigating whether a given set of intermediate features is present in the representations of neural models.
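A minimal probing setup, for orientation: a lightweight classifier is trained on frozen representations to test whether a target feature is linearly decodable. The random features and synthetic labels below are stand-ins for a model's hidden states and the probed property, not the paper's experimental setup.

```python
# Minimal probing (diagnostic classification) sketch with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 768))      # stand-in for frozen hidden states
labels = (reps[:, 0] > 0).astype(int)    # stand-in for the probed property

X_tr, X_te, y_tr, y_te = train_test_split(reps, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy suggests the feature is present in the
# representations (modulo the usual caveats about probe capacity).
print("probe accuracy:", probe.score(X_te, y_te))
```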
no code implementations • 25 Oct 2020 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.
no code implementations • 1 Oct 2020 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).
no code implementations • COLING 2022 • Marco Valentino, Mokanarangan Thayaparan, André Freitas
Most contemporary approaches for multi-hop Natural Language Inference (NLI) construct explanations considering each test case in isolation.
1 code implementation • EACL 2021 • Marco Valentino, Mokanarangan Thayaparan, André Freitas
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA).
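For intuition, explanation reconstruction can be approximated as ranking a fact bank against the question and candidate answer. The toy sketch below scores facts with TF-IDF similarity only; the fact bank and query are illustrative, and the paper's framework goes further than this relevance component.

```python
# Minimal sketch of explanation reconstruction as fact ranking: score each
# fact in a bank by TF-IDF similarity to the question + answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

facts = [
    "friction produces heat",
    "rubbing two objects together causes friction",
    "a stick is a kind of object",
    "the sun is a kind of star",
]
query = "rubbing two sticks together produces heat"

vec = TfidfVectorizer().fit(facts + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(facts))[0]

# Top-ranked facts approximate a multi-hop explanation for the query.
for score, fact in sorted(zip(sims, facts), reverse=True):
    print(f"{score:.2f}  {fact}")
```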
no code implementations • WS 2019 • Mokanarangan Thayaparan, Marco Valentino, Viktor Schlegel, André Freitas
Recent advances in reading comprehension have resulted in models that surpass human performance when the answer is contained in a single, continuous passage of text.