no code implementations • EMNLP (sustainlp) 2020 • Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.
no code implementations • 3 Jul 2023 • Yihong Chen, Kelly Marchisio, Roberta Raileanu, David Ifeoluwa Adelani, Pontus Stenetorp, Sebastian Riedel, Mikel Artetxe
Pretrained language models (PLMs) are today the primary model for natural language processing.
1 code implementation • 20 Feb 2023 • Nathanaël Carraz Rakotonirina, Roberto Dessì, Fabio Petroni, Sebastian Riedel, Marco Baroni
We study whether automatically-induced prompts that effectively extract information from a language model can also be used, out-of-the-box, to probe other language models for the same information.
1 code implementation • 16 Nov 2022 • Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries.
1 code implementation • 30 Oct 2022 • Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Experiments on various knowledge-intensive tasks such as question answering and dialogue show that simply augmenting parametric models (T5-base) using our method produces more accurate results (e.g., 25.8 -> 44.3 EM on NQ) while retaining a high throughput (e.g., 1000 queries/s on NQ).
Ranked #4 on Question Answering on KILT: ELI5
no code implementations • 13 Oct 2022 • Linqing Liu, Minghan Li, Jimmy Lin, Sebastian Riedel, Pontus Stenetorp
To balance these two considerations, we propose a combination of an effective filtering strategy and fusion of the retrieved documents based on the generation probability of each context.
1 code implementation • 27 Sep 2022 • Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, Fabio Petroni
Evaluation of text generation to date has primarily focused on content created sequentially, rather than improvements on a piece of text.
no code implementations • 24 Aug 2022 • Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, Sebastian Riedel
Textual content is often the output of a collaborative writing process: We start with an initial draft, ask for suggestions, and repeatedly make changes.
1 code implementation • 5 Aug 2022 • Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, Edouard Grave
Retrieval-augmented models are known to excel at knowledge-intensive tasks without needing as many parameters, but it is unclear whether they work in few-shot settings.
Ranked #1 on Question Answering on Natural Questions
no code implementations • 20 Jul 2022 • Yihong Chen, Pushkar Mishra, Luca Franceschi, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs).
1 code implementation • 8 Jul 2022 • Fabio Petroni, Samuel Broscheit, Aleksandra Piktus, Patrick Lewis, Gautier Izacard, Lucas Hosseini, Jane Dwivedi-Yu, Maria Lomeli, Timo Schick, Pierre-Emmanuel Mazaré, Armand Joulin, Edouard Grave, Sebastian Riedel
Hence, maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort.
1 code implementation • 25 May 2022 • Nora Kassner, Fabio Petroni, Mikhail Plekhanov, Sebastian Riedel, Nicola Cancedda
This paper introduces the Unknown Entity Discovery and Indexing (EDIN) benchmark, in which unknown entities (entities with neither a description in the knowledge base nor labeled mentions) have to be integrated into an existing entity linking system.
no code implementations • NAACL 2022 • Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages.
no code implementations • Findings (ACL) 2022 • Daniel Simig, Fabio Petroni, Pouya Yanki, Kashyap Popat, Christina Du, Sebastian Riedel, Majid Yazdani
To develop systems that simplify this process, we introduce the task of open vocabulary XMC (OXMC): given a piece of content, predict a set of labels, some of which may be outside of the known tag set.
1 code implementation • 22 Apr 2022 • Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, Fabio Petroni
Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus.
2 code implementations • 18 Dec 2021 • Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel
In order to address the increasing demands of real-world applications, research on knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality, and noise.
no code implementations • NAACL 2022 • Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, Douwe Kiela
We collect training datasets in twenty experimental settings and perform a detailed analysis of this approach for the task of extractive question answering (QA) for both standard and adversarial data collection.
5 code implementations • 16 Dec 2021 • Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave
In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings.
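The kind of contrastive objective referred to here can be illustrated with a generic InfoNCE-style loss, a minimal sketch under assumed cosine-similarity scores (the temperature is an arbitrary illustrative value, not the paper's setting):

```python
import math

def info_nce_loss(pos_sim, neg_sims, temperature=0.05):
    """InfoNCE-style contrastive loss for a single query: the negative log
    probability of the positive passage under a softmax over the positive
    and all negative similarity scores."""
    logits = [s / temperature for s in [pos_sim] + list(neg_sims)]
    m = max(logits)  # max-subtraction for numerical stability
    log_denom = m + math.log(sum(math.exp(s - m) for s in logits))
    return log_denom - logits[0]
```

Pushing the positive similarity above the negatives drives the loss toward zero, which is what shapes the dense retriever's embedding space.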
no code implementations • NAACL 2022 • Patrick Lewis, Barlas Oğuz, Wenhan Xiong, Fabio Petroni, Wen-tau Yih, Sebastian Riedel
DrBoost is trained in stages: each component model is learned sequentially and specialized by focusing only on retrieval mistakes made by the current ensemble.
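The staged training described above can be sketched with a toy boosting loop (hypothetical helper names, not the DrBoost implementation; the trivial "retriever" memorises query-passage pairs and stands in for a learned dense model):

```python
def train_component(failures):
    """Toy stand-in for training a retriever: memorise query -> passage."""
    return dict(failures)

def ensemble_retrieve(components, query):
    """Return the first component's answer for the query, if any knows it."""
    for component in components:
        if query in component:
            return component[query]
    return None

def boosted_training(examples, num_rounds=5):
    """Each round specialises a new component on the current mistakes only."""
    components = []
    for _ in range(num_rounds):
        failures = [(q, p) for q, p in examples
                    if ensemble_retrieve(components, q) != p]
        if not failures:
            break  # the ensemble is already correct everywhere
        components.append(train_component(failures[: max(1, len(failures) // 2)]))
    return components
```

Each round shrinks the failure set, so later components see only the residual hard cases, mirroring the sequential specialisation described in the abstract.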
1 code implementation • 8 Oct 2021 • Yuval Kirstain, Patrick Lewis, Sebastian Riedel, Omer Levy
We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks.
1 code implementation • AKBC 2021 • Yihong Chen, Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp
Learning good representations on multi-relational graphs is essential to knowledge base completion (KBC).
Ranked #1 on Link Prediction on CoDEx Small
no code implementations • 29 Sep 2021 • Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave
By contrast, in many other NLP tasks, conventional self-supervised pre-training based on masking leads to strong generalization with a small number of training examples.
1 code implementation • Findings (NAACL) 2022 • Linqing Liu, Patrick Lewis, Sebastian Riedel, Pontus Stenetorp
Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions.
1 code implementation • 25 Aug 2021 • Amrith Krishna, Sebastian Riedel, Andreas Vlachos
Fact verification systems typically rely on neural network classifiers for veracity prediction which lack explainability.
Ranked #1 on Fact Verification on FEVER
1 code implementation • Findings (NAACL) 2022 • Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad
Pre-training on larger datasets with ever-increasing model size is now a proven recipe for increased performance across almost all NLP tasks.
Ranked #2 on Passage Retrieval on Natural Questions (using extra training data)
1 code implementation • ACL 2021 • Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Adaptive Computation (AC) has been shown to be effective in improving the efficiency of Open-Domain Question Answering (ODQA) systems.
2 code implementations • Findings (ACL) 2022 • Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel
Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning.
1 code implementation • ACL 2021 • James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, Alon Halevy
Neural models have shown impressive performance gains in answering queries from natural language text.
1 code implementation • ACL 2022 • Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
When primed with only a handful of training samples, very large pretrained language models such as GPT-3 have shown results competitive with fully supervised, fine-tuned large pretrained language models.
no code implementations • EMNLP 2021 • Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, Douwe Kiela
We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
no code implementations • NAACL 2021 • Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, Adina Williams
We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking.
1 code implementation • 23 Mar 2021 • Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni
Moreover, in a zero-shot setting on languages with no training data at all, mGENRE treats the target language as a latent variable that is marginalized at prediction time.
Ranked #2 on Entity Disambiguation on Mewsli-9 (using extra training data)
1 code implementation • 13 Feb 2021 • Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, Sebastian Riedel
We introduce a new QA-pair retriever, RePAQ, to complement PAQ.
no code implementations • 1 Jan 2021 • Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih
We review the EfficientQA competition from NeurIPS 2020.
1 code implementation • 30 Dec 2020 • Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, Edouard Grave
Recently, retrieval systems based on dense representations have led to important improvements in open-domain question answering, and related tasks.
no code implementations • ACL 2021 • Michael Schlichtkrull, Vladimir Karpukhin, Barlas Oğuz, Mike Lewis, Wen-tau Yih, Sebastian Riedel
Structured information is an important knowledge source for automatic verification of factual claims.
no code implementations • EMNLP 2020 • Angela Fan, Aleksandra Piktus, Fabio Petroni, Guillaume Wenzek, Marzieh Saeidi, Andreas Vlachos, Antoine Bordes, Sebastian Riedel
Fact checking at scale is difficult -- while the number of active fact checking websites is growing, it remains too small for the needs of the contemporary media ecosystem.
no code implementations • EMNLP 2020 • Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp
Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.
no code implementations • 14 Oct 2020 • James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, Alon Halevy
We describe NeuralDB, a database system with no pre-defined schema, in which updates and queries are given in natural language.
2 code implementations • ICLR 2021 • Nicola De Cao, Gautier Izacard, Sebastian Riedel, Fabio Petroni
For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article).
Ranked #1 on Entity Linking on Derczynski
1 code implementation • ICLR 2021 • Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER.
Ranked #16 on Question Answering on HotpotQA
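The iterative query-reformulation idea behind multi-hop dense retrieval can be sketched as follows (a toy illustration with bag-of-words stand-ins for the dense encoder and index, not the paper's implementation):

```python
def multi_hop_retrieve(encode, search, question, hops=2):
    """Each hop re-encodes the question together with the evidence gathered
    so far, so a later query can reach passages the first hop missed."""
    query, evidence = question, []
    for _ in range(hops):
        passage = search(encode(query), exclude=evidence)
        evidence.append(passage)
        query = question + " " + " ".join(evidence)  # reformulate with evidence
    return evidence

# Toy stand-ins: the "encoder" is a bag of words, the "index" ranks passages
# by word overlap and skips already-retrieved evidence.
PASSAGES = [
    "the golden bridge was built in 1900",
    "in 1900 the engineer was john smith",
]

def encode(text):
    return set(text.split())

def search(query_vec, exclude=()):
    candidates = [p for p in PASSAGES if p not in exclude]
    return max(candidates, key=lambda p: len(query_vec & set(p.split())))
```

In the toy corpus, the second passage only becomes reachable once the first hop's evidence ("1900") has been folded into the query, which is the essence of the multi-hop setup.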
3 code implementations • NAACL 2021 • Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, Sebastian Riedel
We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.
Ranked #3 on Entity Linking on KILT: WNED-CWEB
1 code implementation • EACL 2021 • Patrick Lewis, Pontus Stenetorp, Sebastian Riedel
We also find that 30% of test-set questions have a near-duplicate paraphrase in their corresponding training sets.
2 code implementations • ICML 2020 • Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel
Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs).
Ranked #1 on Relational Reasoning on CLUTRR (k=3)
1 code implementation • 1 Jun 2020 • Federico Errica, Ludovic Denoyer, Bora Edizel, Fabio Petroni, Vassilis Plachouras, Fabrizio Silvestri, Sebastian Riedel
We propose a model to tackle classification tasks in the presence of very little training data.
5 code implementations • NeurIPS 2020 • Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.
Ranked #4 on Question Answering on WebQuestions
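The core of such retrieval-augmented generation is marginalising the generator's likelihood over retrieved documents; a minimal sketch with hypothetical callables (not the released code), in the style of RAG's sequence-level marginalisation:

```python
def rag_sequence_marginal(retrieve_prob, generate_prob, docs, question, answer):
    """Answer probability under a retrieval-augmented model: the generator's
    likelihood of the answer given each retrieved document, weighted by the
    retriever's probability of that document, summed over the top documents."""
    return sum(retrieve_prob(doc, question) * generate_prob(answer, question, doc)
               for doc in docs)
```

The two factors correspond to the light-weight retriever and the seq2seq generator; training updates both through this marginal.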
1 code implementation • ACL 2020 • Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.
Ranked #6 on Text-To-SQL on Spider
no code implementations • AKBC 2020 • Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering.
1 code implementation • EMNLP 2020 • Marcin Kardas, Piotr Czapla, Pontus Stenetorp, Sebastian Ruder, Sebastian Riedel, Ross Taylor, Robert Stojnic
Tracking progress in machine learning has become increasingly difficult with the recent explosion in the number of papers.
1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Johannes Welbl, Pasquale Minervini, Max Bartolo, Pontus Stenetorp, Sebastian Riedel
Current reading comprehension models generalise well to in-distribution test sets, yet perform poorly on adversarially selected inputs.
1 code implementation • 2 Feb 2020 • Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, Pontus Stenetorp
We find that training on adversarially collected samples leads to strong generalisation to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop.
Ranked #1 on Reading Comprehension on AdversarialQA (using extra training data)
3 code implementations • 17 Dec 2019 • Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.
Ranked #3 on Link Prediction on FB122
3 code implementations • EMNLP 2020 • Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, Luke Zettlemoyer
This paper introduces a conceptually simple, scalable, and highly effective BERT-based entity linking model, along with an extensive evaluation of its accuracy-speed trade-off.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, Sebastian Riedel
A generated sentence is verifiable if it can be corroborated or disproved by Wikipedia, and we find that the verifiability of generated text strongly depends on the decoding strategy.
4 code implementations • ACL 2020 • Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk
An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.
1 code implementation • 9 Oct 2019 • Viswanath Sivakumar, Olivier Delalleau, Tim Rocktäschel, Alexander H. Miller, Heinrich Küttler, Nantas Nardelli, Mike Rabbat, Joelle Pineau, Sebastian Riedel
This is largely an artifact of building on top of frameworks designed for RL in games (e.g., OpenAI Gym).
no code implementations • 25 Sep 2019 • Federico Errica, Fabrizio Silvestri, Bora Edizel, Sebastian Riedel, Ludovic Denoyer, Vassilis Plachouras
We propose a model to tackle classification tasks in the presence of very little training data.
1 code implementation • IJCNLP 2019 • Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks.
no code implementations • 27 Jun 2019 • Sebastian Riedel, Freek Stulp
Physical modeling of robotic system behavior is the foundation for controlling many robotic mechanisms to a satisfactory degree.
1 code implementation • ACL 2019 • Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically.
1 code implementation • 12 Jun 2019 • Alexander I. Cowen-Rivers, Pasquale Minervini, Tim Rocktäschel, Matko Bošnjak, Sebastian Riedel, Jun Wang
Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data.
no code implementations • ICLR 2019 • Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Edward Grefenstette, Sebastian Riedel
Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering.
1 code implementation • NAACL 2019 • Tom Hosking, Sebastian Riedel
Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation.
Ranked #12 on Question Generation on SQuAD1.1
no code implementations • 31 Jan 2019 • Tom Crossland, Pontus Stenetorp, Sebastian Riedel, Daisuke Kawata, Thomas D. Kitching, Rupert A. C. Croft
We present an approach for automatic extraction of measured values from the astrophysical literature, using the Hubble constant for our pilot study.
no code implementations • 23 Nov 2018 • Spiros Denaxas, Pontus Stenetorp, Sebastian Riedel, Maria Pikoula, Richard Dobson, Harry Hemingway
Electronic health records (EHR) are increasingly being used for constructing disease risk prediction models.
no code implementations • WS 2018 • Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
In this paper we describe our 2nd place FEVER shared-task system that achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation), and 65.41% on the development set.
no code implementations • 27 Sep 2018 • Tom Hosking, Sebastian Riedel
Question generation is an important task for improving our ability to process natural language data, with additional challenges over other sequence transformation tasks.
no code implementations • 6 Sep 2018 • Andres Campero, Aldo Pareja, Tim Klinger, Josh Tenenbaum, Sebastian Riedel
Our approach is neuro-symbolic in the sense that the rule predicates and core facts are given dense vector representations.
no code implementations • EMNLP 2018 • Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, Sebastian Riedel
This task requires both the interpretation of rules and the application of background knowledge.
2 code implementations • CONLL 2018 • Pasquale Minervini, Sebastian Riedel
They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation.
no code implementations • 21 Jul 2018 • Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Sebastian Riedel
Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest.
1 code implementation • ACL 2018 • Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel
For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.
2 code implementations • 20 Jun 2018 • Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel
For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.
1 code implementation • ACL 2018 • Georgios P. Spithourakis, Sebastian Riedel
In this paper, we explore different strategies for modelling numerals with language models, such as memorisation and digit-by-digit composition, and propose a novel neural architecture that uses a continuous probability density function to model numerals from an open vocabulary.
no code implementations • WS 2018 • Jeff Mitchell, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.
no code implementations • NAACL 2018 • Vicente Ivan Sanchez Carmona, Jeff Mitchell, Sebastian Riedel
Natural Language Inference is a challenging task that has received substantial attention, and state-of-the-art models now achieve impressive test set performance in the form of accuracy scores.
no code implementations • 22 Apr 2018 • Jeff Mitchell, Sebastian Riedel
We investigate applying repurposed generic QA data and models to a recently proposed relation extraction task.
no code implementations • TACL 2018 • Johannes Welbl, Pontus Stenetorp, Sebastian Riedel
We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods.
Ranked #9 on Question Answering on WikiHop
1 code implementation • 24 Jul 2017 • Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, Sebastian Riedel
The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples.
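The minimax structure of that objective can be sketched as follows (a toy illustration with hypothetical names, not the authors' code; the adversary's search is reduced to picking the worst candidate from a given pool):

```python
def regularised_loss(supervised_loss, inconsistency_loss, adversarial_candidates,
                     weight=1.0):
    """Minimax sketch: the adversary selects the candidate example that
    maximises the inconsistency loss; the model then minimises the supervised
    loss plus that worst-case inconsistency term."""
    worst_case = max(inconsistency_loss(x) for x in adversarial_candidates)
    return supervised_loss + weight * worst_case
```

In the actual setup the inner maximisation searches over adversarial examples rather than a fixed pool, but the outer objective has this additive min-plus-worst-case shape.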
9 code implementations • 11 Jul 2017 • Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, Sebastian Riedel
Identifying public misinformation is a complicated and challenging task.
Ranked #5 on Fake News Detection on FNC-1
8 code implementations • 5 Jul 2017 • Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets.
Ranked #1 on Link Prediction on WN18
2 code implementations • CONLL 2017 • Ed Collins, Isabelle Augenstein, Sebastian Riedel
Automatic summarisation is a popular approach to reduce a document to its main arguments.
3 code implementations • NeurIPS 2017 • Tim Rocktäschel, Sebastian Riedel
We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols.
1 code implementation • SEMEVAL 2017 • Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, Andrew McCallum
We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials.
no code implementations • EACL 2017 • Renars Liepins, Ulrich Germann, Guntis Barzdins, Alexandra Birch, Steve Renals, Susanne Weber, Peggy van der Kreeft, Hervé Bourlard, João Prieto, Ondřej Klejch, Peter Bell, Alexandros Lazaridis, Alfonso Mendes, Sebastian Riedel, Mariana S. C. Almeida, Pedro Balage, Shay B. Cohen, Tomasz Dwojak, Philip N. Garner, Andreas Giefer, Marcin Junczys-Dowmunt, Hina Imran, David Nogueira, Ahmed Ali, Sebastião Miranda, Andrei Popescu-Belis, Lesly Miculicich Werlen, Nikos Papasarantopoulos, Abiola Obamuyide, Clive Jones, Fahim Dalvi, Andreas Vlachos, Yang Wang, Sibo Tong, Rico Sennrich, Nikolaos Pappas, Shashi Narayan, Marco Damonte, Nadir Durrani, Sameer Khurana, Ahmed Abdelali, Hassan Sajjad, Stephan Vogel, David Sheppey, Chris Hernon, Jeff Mitchell
We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring.
no code implementations • EACL 2017 • Ivan Sanchez, Sebastian Riedel
One key property of word embeddings currently under study is their capacity to encode hypernymy.
no code implementations • EACL 2017 • Andreas Vlachos, Gerasimos Lampouras, Sebastian Riedel
Imitation learning is a learning paradigm originally developed to learn robotic controllers from demonstrations by humans, e.g., autonomous flight from pilot demonstrations.
2 code implementations • 22 Feb 2017 • Théo Trouillon, Christopher R. Dance, Johannes Welbl, Sebastian Riedel, Éric Gaussier, Guillaume Bouchard
In statistical relational learning, knowledge graph completion deals with automatically understanding the structure of large knowledge graphs---labeled directed graphs---and predicting missing relationships---labeled edges.
Ranked #2 on Knowledge Graphs on FB15k
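The complex-embedding model behind this line of work (ComplEx-style factorisation) scores a candidate triple with a compact bilinear form; a sketch of the scoring function, where e_s and e_o are complex entity embeddings, w_r is a complex relation embedding, and the bar denotes complex conjugation:

```latex
\phi(r, s, o) = \operatorname{Re}\big(\langle w_r, e_s, \bar{e}_o \rangle\big)
              = \operatorname{Re}\Big(\sum_{k=1}^{K} w_{rk}\, e_{sk}\, \bar{e}_{ok}\Big)
```

The conjugation makes the score asymmetric in s and o, which is what lets a simple trilinear product model directed (non-symmetric) relations.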
no code implementations • 15 Feb 2017 • Michał Daniluk, Tim Rocktäschel, Johannes Welbl, Sebastian Riedel
This vector is used both for predicting the next token as well as for the key and value of a differentiable memory of a token history.
no code implementations • 17 Jan 2017 • Marzieh Saeidi, Alessandro Venerandi, Licia Capra, Sebastian Riedel
We use the text from the QA platform of Yahoo!
no code implementations • 29 Dec 2016 • Jonathan Godwin, Pontus Stenetorp, Sebastian Riedel
In this paper we present a novel Neural Network algorithm for conducting semi-supervised learning for sequence labeling tasks arranged in a linguistically motivated hierarchy.
5 code implementations • 24 Nov 2016 • Avishkar Bhoopchand, Tim Rocktäschel, Earl Barr, Sebastian Riedel
By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a five-percentage-point increase in accuracy for code suggestion compared to an LSTM baseline.
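The pointer-augmented prediction can be sketched as a gated mixture of two distributions (a generic sketch; the actual model computes the gate from the hidden state rather than taking it as a scalar argument):

```python
def pointer_mixture(vocab_dist, pointer_dist, copy_gate):
    """Blend the LM's vocabulary softmax with a pointer distribution over
    identifiers seen in the history, weighted by a gate in [0, 1]."""
    tokens = set(vocab_dist) | set(pointer_dist)
    return {t: copy_gate * pointer_dist.get(t, 0.0)
               + (1.0 - copy_gate) * vocab_dist.get(t, 0.0)
            for t in tokens}
```

Because both inputs are probability distributions and the gate weights sum to one, the mixture is itself a valid distribution, so rare identifiers from the history can receive mass the vocabulary softmax alone would never assign.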
no code implementations • 30 Oct 2016 • Jason Naradowsky, Sebastian Riedel
In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed.
no code implementations • 24 Oct 2016 • Mark Neumann, Pontus Stenetorp, Sebastian Riedel
Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading.
no code implementations • WS 2016 • Georgios P. Spithourakis, Steffen E. Petersen, Sebastian Riedel
In this paper, we investigate how grounded and conditional extensions to standard neural language models can bring improvements in the tasks of word prediction and completion.
no code implementations • COLING 2016 • Marzieh Saeidi, Guillaume Bouchard, Maria Liakata, Sebastian Riedel
In this paper, we introduce the task of targeted aspect-based sentiment analysis.
Ranked #5 on Aspect-Based Sentiment Analysis (ABSA) on Sentihood
7 code implementations • WS 2016 • Ben Eisner, Tim Rocktäschel, Isabelle Augenstein, Matko Bošnjak, Sebastian Riedel
Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings.
no code implementations • EMNLP 2016 • Georgios P. Spithourakis, Isabelle Augenstein, Sebastian Riedel
Semantic error detection and correction is an important task for applications such as fact checking, speech-to-text or grammatical error correction.
no code implementations • EMNLP 2016 • Thomas Demeester, Tim Rocktäschel, Sebastian Riedel
Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks.
8 code implementations • 20 Jun 2016 • Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, Guillaume Bouchard
In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases.
Ranked #4 on Link Prediction on FB122
1 code implementation • EACL 2017 • Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
In this work, we investigate several neural network architectures for fine-grained entity type classification.
no code implementations • 4 Jun 2016 • Vladyslav Kolesnyk, Tim Rocktäschel, Sebastian Riedel
We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention.
1 code implementation • ICML 2017 • Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel
Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model.
no code implementations • WS 2016 • Johannes Welbl, Guillaume Bouchard, Sebastian Riedel
Embedding-based Knowledge Base Completion models have so far mostly combined distributed representations of individual entities or relations to compute truth scores of missing links.
no code implementations • WS 2016 • Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that unlike previously proposed models recursively composes representations of entity mention contexts.
2 code implementations • 17 Sep 2015 • Antonio Trenta, Anthony Hunter, Sebastian Riedel
An evidence table has columns for the patient group, for each of the interventions being compared, for the criterion for the comparison (e.g., proportion who survived after 5 years from treatment), and for each of the results.
no code implementations • 14 Nov 2013 • Sameer Singh, Sebastian Riedel, Andrew McCallum
Belief Propagation has been widely used for marginal inference; however, it is slow on problems with large-domain variables and high-order factors.
no code implementations • 26 Sep 2013 • Hung Bui, Tuyen Huynh, Sebastian Riedel
This automorphism group provides a precise mathematical framework for lifted inference in the general exponential family.
no code implementations • NeurIPS 2012 • David Belanger, Alexandre Passos, Sebastian Riedel, Andrew McCallum
Linear chains and trees are basic building blocks in many applications of graphical models.