no code implementations • EMNLP (sustainlp) 2020 • Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.
1 code implementation • 31 May 2023 • Aryo Pradipta Gema, Dominik Grabarczyk, Wolf De Wulf, Piyush Borole, Javier Antonio Alfaro, Pasquale Minervini, Antonio Vergari, Ajitha Rajan
We achieve a three-fold improvement in terms of performance based on the HITS@10 score over previous work on the same biomedical knowledge graph.
no code implementations • 22 May 2023 • Jesus Solano, Oana-Maria Camburu, Pasquale Minervini
Explaining the decisions of neural models is crucial for ensuring their trustworthiness at deployment time.
no code implementations • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei
We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline.
no code implementations • 29 Jan 2023 • Erik Arakelyan, Pasquale Minervini, Isabelle Augenstein
Answering complex queries on incomplete knowledge graphs is a challenging task, as a model needs to answer logical queries in the presence of missing knowledge.
no code implementations • 17 Nov 2022 • Adrianna Janik, Maria Torrente, Luca Costabello, Virginia Calvo, Brian Walsh, Carlos Camps, Sameh K. Mohamed, Ana L. Ortega, Vít Nováček, Bartomeu Massutí, Pasquale Minervini, M. Rosario Garcia Campelo, Edel del Barco, Joaquim Bosch-Barrera, Ernestina Menasalvas, Mohan Timilsina, Mariano Provencio
Conclusions: Our results show that machine learning models trained on tabular and graph data can enable objective, personalised and reproducible prediction of relapse and therefore, disease outcome in patients with early-stage NSCLC.
1 code implementation • 30 Oct 2022 • Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Experiments on various knowledge-intensive tasks such as question answering and dialogue datasets show that simply augmenting parametric models (T5-base) using our method produces more accurate results (e.g., 25.8 → 44.3 EM on NQ) while retaining a high throughput (e.g., 1,000 queries/s on NQ).
Ranked #4 on Question Answering on KILT: ELI5
no code implementations • 27 Oct 2022 • Andrew J. Wren, Pasquale Minervini, Luca Franceschi, Valentina Zantedeschi
Recently, continuous relaxations have been proposed to learn Directed Acyclic Graphs (DAGs) from data via backpropagation, instead of using combinatorial optimisation.
1 code implementation • 11 Sep 2022 • Pasquale Minervini, Luca Franceschi, Mathias Niepert
In this work, we present Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information with the degree of bias in the gradient estimates.
no code implementations • 20 Jul 2022 • Yihong Chen, Pushkar Mishra, Luca Franceschi, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs).
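For context, DistMult scores a triple (s, r, o) as a trilinear product of the subject, relation, and object embeddings. A minimal sketch, where the embedding size, entity names, and random initialisation are illustrative rather than from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # illustrative embedding size

# Toy embedding tables for entities and relations (normally learned from data).
entity_emb = {"edinburgh": rng.normal(size=dim), "scotland": rng.normal(size=dim)}
relation_emb = {"located_in": rng.normal(size=dim)}

def distmult_score(s: str, r: str, o: str) -> float:
    # DistMult: score(s, r, o) = sum_i e_s[i] * w_r[i] * e_o[i]
    return float(np.sum(entity_emb[s] * relation_emb[r] * entity_emb[o]))

score = distmult_score("edinburgh", "located_in", "scotland")
```

Note that the elementwise product makes DistMult symmetric in the subject and object, one of the modelling limitations that motivates richer factorisation- and GNN-based alternatives.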
1 code implementation • 23 May 2022 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Marek Rei
We can further improve model performance and span-level decisions by using the e-SNLI explanations during training.
1 code implementation • 12 Apr 2022 • Han Zhou, Ignacio Iacobacci, Pasquale Minervini
Dialogue State Tracking (DST), a crucial component of task-oriented dialogue (ToD) systems, keeps track of all important information pertaining to dialogue history: filling slots with the most probable values throughout the conversation.
1 code implementation • COLING 2022 • Saadullah Amin, Pasquale Minervini, David Chang, Pontus Stenetorp, Günter Neumann
Relation extraction in the biomedical domain is challenging due to the lack of labeled data and the high cost of annotation, which requires domain experts.
no code implementations • 20 Mar 2022 • Wanshui Li, Pasquale Minervini
Contemporary neural networks have achieved considerable success in many areas; however, when exposed to data outside the training distribution, they may fail to predict correct answers.
1 code implementation • 25 Oct 2021 • Jatin Chauhan, Priyanshu Gupta, Pasquale Minervini
We present NNMFAug, a probabilistic framework to perform data augmentation for the task of knowledge graph completion to counter the problem of data scarcity, which can enhance the learning process of neural link predictors.
1 code implementation • AKBC 2021 • Yihong Chen, Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp
Learning good representations on multi-relational graphs is essential to knowledge base completion (KBC).
Ranked #1 on Link Prediction on CoDEx Small
no code implementations • 29 Sep 2021 • Medina Andresel, Daria Stepanova, Trung-Kien Tran, Csaba Domokos, Pasquale Minervini
Recently, low-dimensional vector space representations of Knowledge Graphs (KGs) have been applied to find answers to logical queries over incomplete KGs.
1 code implementation • ACL 2021 • Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Adaptive Computation (AC) has been shown to be effective in improving the efficiency of Open-Domain Question Answering (ODQA) systems.
1 code implementation • AKBC 2021 • Agnieszka Dobrowolska, Antonio Vergari, Pasquale Minervini
In this work, we investigate how to learn novel concepts in Knowledge Graphs (KGs) in a principled way, and how to effectively exploit them to produce more accurate neural link prediction models.
2 code implementations • NeurIPS 2021 • Mathias Niepert, Pasquale Minervini, Luca Franceschi
We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components.
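The core of I-MLE can be illustrated for a single categorical variable: perturb the logits with Gumbel noise, take the MAP (argmax) state, and estimate the gradient as the difference between the MAP states of the current and a target distribution shifted by the downstream gradient. A minimal sketch, where the noise scale, λ, and sample count are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot_argmax(theta):
    # MAP state of a categorical distribution with logits theta.
    z = np.zeros_like(theta)
    z[np.argmax(theta)] = 1.0
    return z

def imle_gradient(theta, grad_z, lam=10.0, n_samples=10):
    """Sketch of an I-MLE-style gradient estimate for one categorical variable.

    Perturb-and-MAP: sample Gumbel noise, take the MAP state of the perturbed
    logits, and compare it against the MAP state of logits shifted towards the
    target distribution (theta - lam * grad_z). The averaged difference,
    scaled by 1 / lam, estimates d loss / d theta.
    """
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.gumbel(size=theta.shape)                     # perturb-and-MAP noise
        z = one_hot_argmax(theta + eps)                        # current distribution
        z_target = one_hot_argmax(theta - lam * grad_z + eps)  # target distribution
        grad += (z - z_target) / lam
    return grad / n_samples
```

Because both MAP states are one-hot, each per-sample contribution sums to zero, so the estimate only redistributes mass between the logits.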
1 code implementation • 13 Feb 2021 • Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, Sebastian Riedel
We introduce a new QA-pair retriever, RePAQ, to complement PAQ.
1 code implementation • 8 Feb 2021 • Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel
In this work, we show that we can incorporate relational inductive biases, encoded in the form of relational graphs, into agents.
1 code implementation • EACL 2021 • Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, Pasquale Minervini
The first approach is an online method which is effective at removing skew at the expense of stereotype.
no code implementations • 1 Jan 2021 • Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih
We review the EfficientQA competition from NeurIPS 2020.
no code implementations • EMNLP 2020 • Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp
Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.
2 code implementations • ICLR 2021 • Erik Arakelyan, Daniel Daza, Pasquale Minervini, Michael Cochez
Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms.
Ranked #2 on Complex Query Answering on NELL-995
1 code implementation • ICML Workshop LaReL 2020 • Minqi Jiang, Jelena Luketina, Nantas Nardelli, Pasquale Minervini, Philip H. S. Torr, Shimon Whiteson, Tim Rocktäschel
This is partly due to the lack of lightweight simulation environments that sufficiently reflect the semantics of the real world and provide knowledge sources grounded with respect to observations in an RL environment.
2 code implementations • ICML 2020 • Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel
Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs).
Ranked #1 on Relational Reasoning on CLUTRR (k=3)
no code implementations • 30 Apr 2020 • Federico Bianchi, Gaetano Rossiello, Luca Costabello, Matteo Palmonari, Pasquale Minervini
Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces.
1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Johannes Welbl, Pasquale Minervini, Max Bartolo, Pontus Stenetorp, Sebastian Riedel
Current reading comprehension models generalise well to in-distribution test sets, yet perform poorly on adversarially selected inputs.
3 code implementations • 17 Dec 2019 • Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.
Ranked #3 on Link Prediction on FB122
1 code implementation • ACL 2020 • Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.
1 code implementation • ACL 2019 • Leon Weber, Pasquale Minervini, Jannes Münchmeyer, Ulf Leser, Tim Rocktäschel
In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but lead to models that are difficult to interpret.
1 code implementation • 12 Jun 2019 • Alexander I. Cowen-Rivers, Pasquale Minervini, Tim Rocktäschel, Matko Bošnjak, Sebastian Riedel, Jun Wang
Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data.
no code implementations • ICLR 2019 • Leon Weber, Pasquale Minervini, Ulf Leser, Tim Rocktäschel
Currently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability.
no code implementations • ICLR 2019 • Alexander I. Cowen-Rivers, Pasquale Minervini
While traditional variational methods derive an analytical approximation for the intractable distribution over the latent variables, here we construct an inference network conditioned on the symbolic representation of entities and relation types in the Knowledge Graph, to provide the variational distributions.
no code implementations • ICLR 2019 • Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Edward Grefenstette, Sebastian Riedel
Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.
no code implementations • 16 Dec 2018 • Emir Muñoz, Pasquale Minervini, Matthias Nickles
Neural link predictors learn distributed representations of entities and relations in a knowledge graph.
2 code implementations • CONLL 2018 • Pasquale Minervini, Sebastian Riedel
They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation.
no code implementations • 21 Jul 2018 • Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel
Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest.
1 code implementation • ACL 2018 • Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel
For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.
2 code implementations • 20 Jun 2018 • Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel
For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.
no code implementations • WS 2018 • Jeff Mitchell, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.
1 code implementation • 24 Jul 2017 • Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, Sebastian Riedel
The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples.
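The inner/outer structure of this objective can be illustrated with a toy symmetry rule r(s, o) ⇒ r(o, s): the adversary searches for the entity pair that most violates the rule, and the model's loss adds a penalty for that violation. The scoring function, entity set, and λ below are all illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
entities = ["a", "b", "c"]
emb = {e: rng.normal(size=dim) for e in entities}
W = rng.normal(size=(dim, dim))  # asymmetric relation matrix (toy model)

def score(s, o):
    # Bilinear score for the triple (s, r, o) under a fixed relation r.
    return float(emb[s] @ W @ emb[o])

def inconsistency(s, o):
    # Violation of a symmetry rule r(s, o) <=> r(o, s).
    return (score(s, o) - score(o, s)) ** 2

# Inner maximisation (adversary): find the most offending entity pair,
# i.e. the one that maximises the inconsistency loss.
pairs = [(s, o) for s in entities for o in entities if s != o]
s_adv, o_adv = max(pairs, key=lambda p: inconsistency(*p))

def joint_loss(supervised_loss, lam=0.1):
    # Outer minimisation (model): supervised loss plus the inconsistency
    # loss measured on the adversarial examples.
    return supervised_loss + lam * inconsistency(s_adv, o_adv)
```

In the paper's full setting the two optimisations alternate; this sketch shows a single adversarial search step over a small candidate set.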
8 code implementations • 5 Jul 2017 • Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets.
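ConvE's scoring pipeline (reshape and stack the subject and relation embeddings into a 2D grid, convolve, apply a non-linearity, project back to the embedding space, then match against the object embedding) can be sketched as follows. The grid shape, single 2x2 filter, and random projection are toy stand-ins for the learned components (ConvE uses many larger filters and trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, h, w = 8, 2, 4          # embedding dim reshaped to a 2 x 4 grid (toy sizes)
e_s, w_r, e_o = (rng.normal(size=dim) for _ in range(3))
kernel = rng.normal(size=(2, 2))  # single toy filter

def conv2d_valid(x, k):
    # Naive "valid" 2D cross-correlation.
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# 1. Reshape and stack the subject and relation embeddings into a 2D "image".
stacked = np.concatenate([e_s.reshape(h, w), w_r.reshape(h, w)], axis=0)  # (4, 4)
# 2. Convolve, apply a ReLU, flatten to a feature vector.
feat = np.maximum(conv2d_valid(stacked, kernel), 0.0).ravel()
# 3. Project back to the embedding dimension and score against the object.
proj = rng.normal(size=(feat.size, dim))  # toy projection matrix
score = float(np.maximum(feat @ proj, 0.0) @ e_o)
```

The 2D reshaping is what lets the convolution mix entries of the subject and relation embeddings at multiple positions while keeping the parameter count low.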
Ranked #1 on Link Prediction on WN18