no code implementations • Findings (ACL) 2022 • Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, Julia Rozanova, Andre Freitas
The application of Natural Language Inference (NLI) methods over large textual corpora can facilitate scientific discovery, reducing the gap between current research and the available large-scale scientific knowledge.
1 code implementation • NAACL (TextGraphs) 2021 • Peter Jansen, Mokanarangan Thayaparan, Marco Valentino, Dmitry Ustalov
While previous editions of this shared task aimed to evaluate explanatory completeness (finding a set of facts that form a complete, gap-free inference chain from question to correct answer), this 2021 instantiation concentrates on the subtask of determining relevance in large multi-hop explanations.
1 code implementation • COLING (TextGraphs) 2022 • Marco Valentino, Deborah Ferreira, Mokanarangan Thayaparan, André Freitas, Dmitry Ustalov
In this summary paper, we present the results of the first edition of the NLPS task, providing a description of the evaluation data and the participating systems.
no code implementations • 30 Oct 2024 • Leonardo Ranaldi, Marco Valentino, André Freitas
Retrieval-augmented generation (RAG) has emerged as a critical mechanism in contemporary NLP to support Large Language Models (LLMs) in systematically accessing richer factual context.
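As an illustrative sketch (not the system described in the paper), the retrieval half of a RAG pipeline can be reduced to scoring documents by term overlap with the query and prepending the top hit to the LLM prompt; the documents and scoring function here are toy stand-ins:

```python
# Illustrative RAG sketch: rank documents by term overlap with the
# query, then build an augmented prompt from the best match. A real
# system would use dense retrieval and an actual LLM call.

def retrieve(query, docs, k=1):
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["The mitochondrion is the powerhouse of the cell.",
        "Paris is the capital of France."]
print(build_prompt("What is the capital of France?", docs))
```

The overlap heuristic is only a placeholder for a learned retriever, but it makes the retrieve-then-generate structure of the pipeline explicit.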
no code implementations • 18 Oct 2024 • Magdalena Wysocka, Danilo S. Carvalho, Oskar Wysocki, Marco Valentino, Andre Freitas
Syllogistic reasoning is crucial for Natural Language Inference (NLI).
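To make the notion of syllogistic reasoning concrete, here is a minimal sketch (entirely hypothetical, not the paper's method) that chains universal premises in the classic Barbara pattern ("All A are B", "All B are C", therefore "All A are C"), with premises encoded as simple triples:

```python
# Hypothetical sketch of one syllogistic pattern (Barbara) over
# premises encoded as (quantifier, subject, predicate) triples.

def barbara(premises):
    """Return conclusions derivable by chaining universal premises."""
    universals = {(s, p) for q, s, p in premises if q == "all"}
    conclusions = set()
    for s, p in universals:
        for s2, p2 in universals:
            if p == s2:  # "all s are p" + "all p are p2" -> "all s are p2"
                conclusions.add(("all", s, p2))
    return conclusions

premises = [("all", "men", "mortals"), ("all", "greeks", "men")]
print(barbara(premises))  # {('all', 'greeks', 'mortals')}
```

An NLI model that handles syllogisms robustly must reproduce exactly this kind of chaining when the premises are expressed in free-form natural language.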
no code implementations • 5 Oct 2024 • Marco Valentino, André Freitas
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation, and representing one of the media supporting scientific discovery and communication.
no code implementations • 16 Aug 2024 • Geonhee Kim, Marco Valentino, André Freitas
Overall, our findings suggest that LMs do learn transferable, content-independent reasoning mechanisms, but that these mechanisms do not involve generalisable and abstract logical primitives and remain susceptible to contamination by the world knowledge acquired during pre-training.
1 code implementation • 2 May 2024 • Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas
Natural language explanations represent a proxy for evaluating explanation-based and multi-step Natural Language Inference (NLI) models.
no code implementations • 7 Apr 2024 • Mael Jullien, Marco Valentino, André Freitas
Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials.
no code implementations • 3 Apr 2024 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).
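The kind of objective an ILP encodes for explanation selection can be sketched with a toy brute-force search (a real ILP solver handles this at scale; the facts, relevance scores, and overlap penalties below are invented for illustration):

```python
from itertools import combinations

# Toy sketch of an ILP-style objective for explanation selection:
# maximise the relevance of the chosen facts minus a penalty for
# redundant (overlapping) pairs, under a fixed size constraint.
# Brute force over subsets stands in for an actual ILP solver.

def select_facts(facts, relevance, overlap, k=2):
    best, best_score = None, float("-inf")
    for subset in combinations(range(len(facts)), k):
        score = sum(relevance[i] for i in subset)
        score -= sum(overlap.get((i, j), 0)
                     for i in subset for j in subset if i < j)
        if score > best_score:
            best, best_score = subset, score
    return [facts[i] for i in best]

facts = ["plants need light", "light is energy", "plants need sunlight"]
relevance = [0.9, 0.7, 0.8]
overlap = {(0, 2): 0.6}  # near-duplicate facts are penalised
print(select_facts(facts, relevance, overlap))
```

The redundancy penalty steers the selection away from the two near-duplicate facts, which is precisely the kind of structural constraint an ILP formulation makes explicit and controllable.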
no code implementations • 3 Apr 2024 • Julia Rozanova, Marco Valentino, André Freitas
Rigorous evaluation of the causal effects of semantic features on language model predictions can be hard to achieve for natural language reasoning problems.
no code implementations • 16 Feb 2024 • Dhairya Dalal, Marco Valentino, André Freitas, Paul Buitelaar
While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood.
1 code implementation • 1 Feb 2024 • Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas
An increasing amount of research in Natural Language Inference (NLI) focuses on the application and evaluation of Large Language Models (LLMs) and their reasoning capabilities.
1 code implementation • 1 Feb 2024 • Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, Andre Freitas
Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP as the underlying generative mechanisms could be better localised, explained and improved upon.
1 code implementation • 14 Nov 2023 • Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
The injection of syntactic information into Variational AutoEncoders (VAEs) has been shown to result in an overall improvement in performance and generalisation.
1 code implementation • 2 Nov 2023 • Marco Valentino, Jordan Meadows, Lan Zhang, André Freitas
To this end, we introduce different multi-operational representation paradigms, modelling mathematical operations as explicit geometric transformations.
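A toy illustration (not the paper's model) of what it means to treat operations as explicit geometric transformations: if each operation is a 2D rotation of a latent vector, then composing operations amounts to composing transformations:

```python
import math

# Toy illustration: modelling an operation as an explicit geometric
# transformation (a 2D rotation) of a latent vector, so that applying
# two operations in sequence composes the two transformations.

def rotate(vec, theta):
    x, y = vec
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

v = (1.0, 0.0)
step = math.pi / 2
once = rotate(v, step)       # applying one operation
twice = rotate(once, step)   # composing two operations
print([round(c, 6) for c in twice])  # [-1.0, 0.0]
```

The appeal of such representations is exactly this compositionality: chains of mathematical operations map to chains of transformations in the latent space.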
no code implementations • 19 Jul 2023 • Jordan Meadows, Marco Valentino, Andre Freitas
This paper investigates how hallucination rates in Large Language Models (LLMs) may be controlled via a symbolic data generation framework, exploring a fundamental relationship between the rate of certain mathematical errors and types of input intervention.
no code implementations • 21 May 2023 • Jordan Meadows, Marco Valentino, Damien Teney, Andre Freitas
This paper proposes a methodology for generating and perturbing detailed derivations of equations at scale, aided by a symbolic engine, to evaluate the generalisability of Transformers to out-of-distribution mathematical reasoning problems.
no code implementations • 15 May 2023 • Julia Rozanova, Marco Valentino, Andre Freitas
Rigorous evaluation of the causal effects of semantic features on language model predictions can be hard to achieve for natural language reasoning problems.
1 code implementation • 12 May 2023 • Marco Valentino, Danilo S. Carvalho, André Freitas
Natural language definitions possess a recursive, self-explanatory semantic structure that can support representation learning methods able to preserve explicit conceptual relations and constraints in the latent space.
2 code implementations • 5 May 2023 • Maël Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal Landers, André Freitas
In this work, we present a novel resource to advance research on NLI for reasoning on clinical trial reports (CTRs).
no code implementations • 4 May 2023 • Maël Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal Landers, André Freitas
This paper describes the results of SemEval-2023 Task 7, Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT), which consists of two tasks: a Natural Language Inference (NLI) task and an evidence selection task on clinical trial data.
no code implementations • 20 Apr 2023 • Julia Rozanova, Marco Valentino, Lucas Cordeiro, Andre Freitas
Probing strategies have been shown to detect the presence of various linguistic features in large language models; in particular, semantic features intermediate to the "natural logic" fragment of the Natural Language Inference task (NLI).
no code implementations • 5 Aug 2022 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language.
no code implementations • 3 May 2022 • Marco Valentino, André Freitas
A fundamental research goal for Explainable AI (XAI) is to build models that are capable of reasoning through the generation of natural language explanations.
no code implementations • 25 Jan 2022 • Mael Jullien, Marco Valentino, Andre Freitas
With the methodological support of probing (or diagnostic classification), recent studies have demonstrated that Transformers encode syntactic and semantic information to some extent.
1 code implementation • 15 Dec 2021 • Julia Rozanova, Deborah Ferreira, Marco Valentino, Mokanarangan Thayaparan, Andre Freitas
In the interest of interpreting neural NLI models and their reasoning strategies, we carry out a systematic probing study which investigates whether these models capture the crucial semantic features central to natural logic: monotonicity and concept inclusion.
1 code implementation • 25 Jul 2021 • Marco Valentino, Mokanarangan Thayaparan, Deborah Ferreira, André Freitas
Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference.
no code implementations • ACL (NALOMA, IWCS) 2021 • Julia Rozanova, Deborah Ferreira, Mokanarangan Thayaparan, Marco Valentino, André Freitas
Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured in a functional order-theoretic property called monotonicity.
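Monotonicity can be sketched with a tiny, hypothetical taxonomy: in an upward-monotone context a concept can be replaced by a more general one (a hypernym) while preserving entailment, and in a downward-monotone context by a more specific one:

```python
# Illustrative sketch of monotonicity as an order-theoretic property.
# The taxonomy and polarity labels below are hypothetical toy data.

HYPERNYMS = {"dog": "animal", "animal": "creature"}

def is_hypernym(general, specific):
    while specific in HYPERNYMS:
        specific = HYPERNYMS[specific]
        if specific == general:
            return True
    return False

def substitution_entails(context_polarity, old, new):
    if context_polarity == "up":    # e.g. "A dog is sleeping"
        return is_hypernym(new, old)
    if context_polarity == "down":  # e.g. "No dog is sleeping"
        return is_hypernym(old, new)
    return False

print(substitution_entails("up", "dog", "animal"))    # True
print(substitution_entails("down", "dog", "animal"))  # False
```

"A dog is sleeping" entails "An animal is sleeping", while "No dog is sleeping" does not entail "No animal is sleeping"; probing studies test whether neural NLI models are sensitive to exactly this polarity distinction.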
no code implementations • IWCS (ACL) 2021 • Zili Zhou, Marco Valentino, Donal Landers, Andre Freitas
This paper describes N-XKT (Neural encoding based on eXplanatory Knowledge Transfer), a novel method for the automatic transfer of explanatory knowledge through neural encoding mechanisms.
no code implementations • 7 May 2021 • Mokanarangan Thayaparan, Marco Valentino, Deborah Ferreira, Julia Rozanova, André Freitas
This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization.
no code implementations • IWCS (ACL) 2021 • Marco Valentino, Ian Pratt-Hartmann, André Freitas
An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities.
1 code implementation • ACL 2021 • Deborah Ferreira, Julia Rozanova, Mokanarangan Thayaparan, Marco Valentino, André Freitas
Probing (or diagnostic classification) has become a popular strategy for investigating whether a given set of intermediate features is present in the representations of neural models.
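The probing setup can be sketched in a few lines: a simple linear classifier is trained on frozen representations to test whether a feature is linearly decodable from them. The vectors below are toy stand-ins for hidden states; a real study would extract them from a trained model:

```python
# Minimal probing sketch: a perceptron probe trained on frozen "hidden
# states" (toy 2D vectors here) checks whether a target feature is
# linearly decodable. High probe accuracy suggests the feature is
# present in the representations.

def train_probe(reps, labels, epochs=20, lr=0.1):
    w = [0.0] * len(reps[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(reps, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:  # perceptron update on mistakes only
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
                b += lr * sign
    return w, b

def probe_accuracy(w, b, reps, labels):
    hits = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
               for x, y in zip(reps, labels))
    return hits / len(labels)

# Toy "hidden states": the feature is encoded in the first dimension.
reps = [[1.0, 0.2], [0.9, -0.1], [-1.0, 0.3], [-0.8, 0.0]]
labels = [1, 1, 0, 0]
w, b = train_probe(reps, labels)
print(probe_accuracy(w, b, reps, labels))  # 1.0
```

A standard caveat, and part of what motivates the paper's investigation: a successful probe shows the feature is decodable, not that the model actually uses it.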
no code implementations • 25 Oct 2020 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.
no code implementations • 1 Oct 2020 • Mokanarangan Thayaparan, Marco Valentino, André Freitas
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).
no code implementations • COLING 2022 • Marco Valentino, Mokanarangan Thayaparan, André Freitas
Most of the contemporary approaches for multi-hop Natural Language Inference (NLI) construct explanations considering each test case in isolation.
1 code implementation • EACL 2021 • Marco Valentino, Mokanarangan Thayaparan, André Freitas
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA).
1 code implementation • LREC 2020 • Viktor Schlegel, Marco Valentino, André Freitas, Goran Nenadic, Riza Batista-Navarro
Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text.
no code implementations • WS 2019 • Mokanarangan Thayaparan, Marco Valentino, Viktor Schlegel, Andre Freitas
Recent advances in reading comprehension have resulted in models that surpass human performance when the answer is contained in a single, continuous passage of text.