Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language.
Entailment trees have been proposed to simulate the human reasoning process of explanation generation in the context of open-domain textual question answering.
We discuss the recent evolutionary arc of DL models in the direction of integrating prior biological relational and network knowledge (e.g. pathways or protein-protein interaction networks) to support better generalisation and interpretability.
A fundamental research goal for Explainable AI (XAI) is to build models that are capable of reasoning through the generation of natural language explanations.
This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support.
BioBERT and BioMegatron are Transformer models adapted to the biomedical domain using publicly available biomedical corpora.
The ability to learn disentangled representations is a major step for interpretable NLP systems, as it allows latent linguistic features to be controlled.
Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference.
We present a context-preserving text simplification (TS) approach that recursively splits and rephrases complex English sentences into a semantic hierarchy of simplified sentences.
Natural language contexts display logical regularities with respect to substitutions of related concepts: these are captured in a functional order-theoretic property called monotonicity.
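As a minimal illustration of the monotonicity property described above, the sketch below checks entailment between toy "every S P" statements: the quantifier "every" is downward monotone in its subject (which may be narrowed to a more specific concept) and upward monotone in its predicate (which may be widened to a more general one). The vocabulary, hypernym pairs, and function names are hypothetical, chosen only for the example.

```python
# Toy hypernym relation (assumed for illustration): a poodle is a dog,
# and barking is a way of making noise.
hypernym = {"poodle": "dog", "bark": "make_noise"}

def is_more_specific(a, b):
    """True if concept a is directly more specific than concept b."""
    return hypernym.get(a) == b

def entails_every(premise, hypothesis):
    """Entailment between 'every S P' statements via monotonicity:
    the subject position is downward monotone (may be narrowed),
    the predicate position is upward monotone (may be widened)."""
    s1, p1 = premise
    s2, p2 = hypothesis
    subj_ok = s1 == s2 or is_more_specific(s2, s1)
    pred_ok = p1 == p2 or is_more_specific(p1, p2)
    return subj_ok and pred_ok

# "every dog barks" entails "every poodle barks" (subject narrowed)
print(entails_every(("dog", "bark"), ("poodle", "bark")))
# but not the other way around (subject widened is not licensed)
print(entails_every(("poodle", "bark"), ("dog", "bark")))
```

The direction of each allowed substitution is exactly what the order-theoretic account predicts for a downward- or upward-monotone position.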
This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization.
An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities.
Probing (or diagnostic classification) has become a popular strategy for investigating whether a given set of intermediate features is present in the representations of neural models.
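The probing setup above can be sketched in a few lines: a simple classifier is trained on frozen representations to predict a linguistic feature, and its held-out accuracy is read as evidence that the feature is (linearly) decodable. The data here is synthetic with a planted signal, standing in for real model representations; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for frozen model representations: 200 vectors of dimension 16.
# We plant a binary feature (e.g. singular vs. plural) along one dimension,
# mimicking a property linearly encoded by the model.
n, d = 200, 16
X = rng.normal(size=(n, d))
y = (X[:, 3] > 0).astype(int)  # the probed feature depends only on dim 3

# The probe itself is deliberately simple (logistic regression), so that
# high accuracy reflects information in the representations, not probe power.
probe = LogisticRegression().fit(X[:150], y[:150])
acc = probe.score(X[150:], y[150:])
print(acc > 0.9)  # planted signal is linearly separable, so the probe succeeds
```

A common caveat is that a sufficiently expressive probe can decode almost anything, which is why weak linear probes and control tasks are standard practice.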
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).
Existing accounts of explanation emphasise the role of prior experience in the solution of new problems.
Text entailment, the task of determining whether a piece of text logically follows from another piece of text, is a key component in NLP, providing input for many semantic applications such as question answering, text summarization, information extraction, and machine translation, among others.
Neural networks are a prevalent and effective machine learning component, and their application is leading to significant scientific progress in many domains.
We utilise the Richards-Engelhardt framework as a tool for understanding Natural Language Processing system diagrams.
This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA).
Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text.
Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task.
Artificial Intelligence models are becoming increasingly powerful and accurate, supporting or even replacing human decision-making.
Our approach preserves the context of the relational tuples extracted from a source sentence, generating a novel lightweight semantic representation for Open IE that enhances the expressiveness of the extracted propositions.
We present an Open Information Extraction (IE) approach that uses a two-layered transformation stage consisting of a clausal disembedding layer and a phrasal disembedding layer, together with rhetorical relation identification.
This work provides a critique of the set of abstract relations used for semantic relation classification with regard to their ability to express relationships between terms found in domain-specific corpora.
Semantic annotation is fundamental for dealing with large-scale lexical information, mapping it to an enumerable set of categories over which rules and algorithms can be applied; foundational ontology classes can serve as a formal set of categories for such tasks.
Adopting a conceptual model composed of a set of semantic roles for dictionary definitions, we trained a classifier for automatically labeling definitions, preparing the data to be later converted to a graph representation.
Understanding the semantic relationships between terms is a fundamental task in natural language processing applications.
We provide a detailed overview of the various approaches that were proposed to date to solve the task of Open Information Extraction.
In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems.
This short paper outlines research results on object classification in images of Neoclassical furniture.