Search Results for author: Pasquale Minervini

Found 32 papers, 20 papers with code

Don’t Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering

no code implementations EMNLP (sustainlp) 2020 Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.

Open-Domain Question Answering

A Probabilistic Framework for Knowledge Graph Data Augmentation

no code implementations 25 Oct 2021 Jatin Chauhan, Priyanshu Gupta, Pasquale Minervini

We present NNMFAug, a probabilistic framework to perform data augmentation for the task of knowledge graph completion to counter the problem of data scarcity, which can enhance the learning process of neural link predictors.

Data Augmentation Knowledge Graph Completion +1

Neural Concept Formation in Knowledge Graphs

1 code implementation AKBC 2021 Agnieszka Dobrowolska, Antonio Vergari, Pasquale Minervini

In this work, we investigate how to learn novel concepts in Knowledge Graphs (KGs) in a principled way, and how to effectively exploit them to produce more accurate neural link prediction models.

Knowledge Graphs Link Prediction

Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions

1 code implementation NeurIPS 2021 Mathias Niepert, Pasquale Minervini, Luca Franceschi

We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components.

Combinatorial Optimization
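The I-MLE abstract above hinges on a gradient estimate for discrete distributions built from two MAP states: the MAP of the current parameters and the MAP of a target distribution shifted against the downstream loss gradient. A minimal pure-Python sketch of that estimator for a top-k (k-subset) distribution — the Gumbel perturbation the paper uses is omitted here for determinism, and all function names are hypothetical:

```python
def map_top_k(theta, k):
    """MAP state of a k-subset distribution: indicator of the k largest scores."""
    top = sorted(range(len(theta)), key=lambda i: theta[i], reverse=True)[:k]
    return [1.0 if i in top else 0.0 for i in range(len(theta))]

def imle_gradient(theta, k, dl_dz, lam=0.1):
    """I-MLE-style gradient estimate w.r.t. theta:
    (MAP(theta) - MAP(theta - lam * dL/dz)) / lam."""
    z = map_top_k(theta, k)
    z_target = map_top_k([t - lam * g for t, g in zip(theta, dl_dz)], k)
    return [(a - b) / lam for a, b in zip(z, z_target)]
```

The estimate is zero unless the target perturbation actually flips the selected subset, which is why the step size `lam` matters in practice.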

Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning

2 code implementations 8 Feb 2021 Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel

In this work, we show that we can incorporate relational inductive biases, encoded in the form of relational graphs, into agents.


Complex Query Answering with Neural Link Predictors

2 code implementations ICLR 2021 Erik Arakelyan, Daniel Daza, Pasquale Minervini, Michael Cochez

Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms.

Knowledge Graphs
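The entry above answers complex queries by scoring each query atom with a pretrained neural link predictor and combining the atom scores (e.g. with a t-norm such as `min`), so the best-scoring intermediate entity for each atom doubles as an explanation. A toy sketch for a two-hop conjunctive query, with a hard-coded score table standing in for a trained link predictor (all entities, relations, and scores are made up):

```python
# Hypothetical atom scores from a pretrained link predictor.
ATOM_SCORE = {
    ("a", "r1", "b"): 0.9, ("a", "r1", "c"): 0.4,
    ("b", "r2", "d"): 0.8, ("c", "r2", "d"): 0.95,
}
ENTITIES = ["a", "b", "c", "d"]

def score(h, r, t):
    return ATOM_SCORE.get((h, r, t), 0.0)

def answer_path_query(anchor, r1, r2, t_norm=min):
    """Score every candidate answer Y for (anchor, r1, ?X) AND (?X, r2, ?Y),
    keeping the best intermediate X as an explanation."""
    results = {}
    for y in ENTITIES:
        best = max(((t_norm(score(anchor, r1, x), score(x, r2, y)), x)
                    for x in ENTITIES), key=lambda pair: pair[0])
        results[y] = best  # (query score, intermediate entity)
    return results
```

Here the Gödel t-norm (`min`) aggregates the two atoms; the returned intermediate entity is the "intermediate solution" the abstract refers to.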

WordCraft: An Environment for Benchmarking Commonsense Agents

1 code implementation ICML Workshop LaReL 2020 Minqi Jiang, Jelena Luketina, Nantas Nardelli, Pasquale Minervini, Philip H. S. Torr, Shimon Whiteson, Tim Rocktäschel

This is partly due to the lack of lightweight simulation environments that sufficiently reflect the semantics of the real world and provide knowledge sources grounded with respect to observations in an RL environment.

Knowledge Graphs Representation Learning

Learning Reasoning Strategies in End-to-End Differentiable Proving

2 code implementations ICML 2020 Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs).

Link Prediction Relational Reasoning

Knowledge Graph Embeddings and Explainable AI

no code implementations30 Apr 2020 Federico Bianchi, Gaetano Rossiello, Luca Costabello, Matteo Palmonari, Pasquale Minervini

Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces.

Knowledge Graph Embeddings

Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

1 code implementation EMNLP 2020 Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel

Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.

Natural Language Inference

Differentiable Reasoning on Large Knowledge Bases and Natural Language

3 code implementations 17 Dec 2019 Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.

Link Prediction Question Answering +1

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

1 code implementation ACL 2020 Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom

To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.

Decision Making Natural Language Inference

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

1 code implementation ACL 2019 Leon Weber, Pasquale Minervini, Jannes Münchmeyer, Ulf Leser, Tim Rocktäschel

In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but lead to models that are difficult to interpret.

Question Answering

Neural Variational Inference For Estimating Uncertainty in Knowledge Graph Embeddings

1 code implementation 12 Jun 2019 Alexander I. Cowen-Rivers, Pasquale Minervini, Tim Rocktäschel, Matko Bošnjak, Sebastian Riedel, Jun Wang

Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data.

Knowledge Graph Embeddings Knowledge Graphs +3

Scalable Neural Theorem Proving on Knowledge Bases and Natural Language

no code implementations ICLR 2019 Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Edward Grefenstette, Sebastian Riedel

Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.

Automated Theorem Proving Link Prediction +2

Neural Variational Inference For Embedding Knowledge Graphs

no code implementations ICLR 2019 Alexander I. Cowen-Rivers, Pasquale Minervini

While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions.

Knowledge Graphs Latent Variable Models +1

NLProlog: Reasoning with Weak Unification for Natural Language Question Answering

no code implementations ICLR 2019 Leon Weber, Pasquale Minervini, Ulf Leser, Tim Rocktäschel

Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability.

Question Answering

Embedding Cardinality Constraints in Neural Link Predictors

no code implementations 16 Dec 2018 Emir Muñoz, Pasquale Minervini, Matthias Nickles

Neural link predictors learn distributed representations of entities and relations in a knowledge graph.

Knowledge Base Completion Link Prediction
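The snippet above notes that neural link predictors learn distributed representations of entities and relations. A minimal sketch of one standard scoring function, DistMult (a trilinear dot product of subject, relation, and object embeddings), together with the ranking step used for knowledge base completion — all embedding values here are made up:

```python
def distmult_score(e_s, w_r, e_o):
    """DistMult: score(s, r, o) = sum_i e_s[i] * w_r[i] * e_o[i]."""
    return sum(s * r * o for s, r, o in zip(e_s, w_r, e_o))

def rank_objects(e_s, w_r, entity_embs):
    """Rank candidate objects for the query (s, r, ?) by DistMult score."""
    return sorted(entity_embs,
                  key=lambda name: distmult_score(e_s, w_r, entity_embs[name]),
                  reverse=True)
```

The cardinality constraints the paper studies would then act on predictions like these, e.g. penalising a model that ranks too many objects highly for a relation known to be functional.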

Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge

2 code implementations CoNLL 2018 Pasquale Minervini, Sebastian Riedel

They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation.

Language Modelling Natural Language Inference

Towards Neural Theorem Proving at Scale

no code implementations 21 Jul 2018 Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel

Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest.

Automated Theorem Proving Representation Learning

Jack the Reader – A Machine Reading Framework

1 code implementation ACL 2018 Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Information Retrieval Language Understanding +5

Jack the Reader - A Machine Reading Framework

2 code implementations 20 Jun 2018 Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Language Understanding Link Prediction +4

Extrapolation in NLP

no code implementations WS 2018 Jeff Mitchell, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.

Adversarial Sets for Regularising Neural Link Predictors

1 code implementation 24 Jul 2017 Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, Sebastian Riedel

The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples.

Link Prediction Relational Reasoning
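The abstract above spells out the minimax objective: an adversary maximises an inconsistency loss over inputs, and the model minimises a supervised loss plus that inconsistency loss on the adversarial examples. A toy sketch of the inner maximisation for the symmetry rule r(X, Y) ⇒ r(Y, X) — the scalar-embedding scorer and all values are made up:

```python
def score(params, x, y):
    # Toy link predictor with scalar "embeddings" u and v (hypothetical).
    return params["u"][x] * params["v"][y]

def inconsistency(params, x, y):
    # How much the pair (x, y) violates the rule r(X, Y) => r(Y, X).
    return max(0.0, score(params, x, y) - score(params, y, x))

def most_offending(params, entities):
    # Adversary: inner maximisation over candidate input pairs.
    # The model would then take a descent step on
    # supervised_loss + lambda * inconsistency on this pair.
    return max(((x, y) for x in entities for y in entities),
               key=lambda pair: inconsistency(params, *pair))
```

Because the adversary searches over continuous input representations rather than labelled data, the regulariser needs no extra annotation, matching the paper's setup.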

Convolutional 2D Knowledge Graph Embeddings

5 code implementations 5 Jul 2017 Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets.

Knowledge Graph Embeddings Knowledge Graphs +1
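ConvE's scoring pipeline can be sketched in a few lines: arrange the subject and relation embeddings as a 2-D grid, convolve, flatten, project back to embedding size, and take a dot product with the object embedding. A pure-Python toy version — single filter, no batch norm / ReLU / dropout, embeddings stacked as rows rather than reshaped as in the paper, and every numeric value hypothetical:

```python
def conv2d_valid(x, kernel):
    """2-D valid convolution (cross-correlation, as usual in deep learning)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(x[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(x[0]) - kw + 1)]
            for i in range(len(x) - kh + 1)]

def conve_score(e_s, e_r, e_o, kernel, proj):
    """ConvE-style score: stack 2-D-arranged subject/relation embeddings,
    convolve, flatten, project to embedding size, dot with the object."""
    stacked = [e_s, e_r]                      # each embedding is one grid row
    flat = [v for row in conv2d_valid(stacked, kernel) for v in row]
    hidden = [sum(w * v for w, v in zip(w_row, flat)) for w_row in proj]
    return sum(h * o for h, o in zip(hidden, e_o))
```

In the full model the score is computed against all candidate objects at once (1-N scoring), which is part of why ConvE scales to large graphs.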
