Search Results for author: Pontus Stenetorp

Found 47 papers, 14 papers with code

Don’t Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering

no code implementations EMNLP (sustainlp) 2020 Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.

Open-Domain Question Answering
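
As a rough illustration of the retriever-reader pattern with adaptive computation described above, the sketch below retrieves candidate passages cheaply and stops running the expensive reader once a confidence threshold is met. The retrieve and read_passage functions and the threshold are hypothetical toy stand-ins, not the paper's actual method.

    # Illustrative retriever-reader loop with an adaptive early exit.
    # `retrieve` and `read_passage` are toy stand-ins for a real retriever
    # (e.g. BM25/DPR) and an expensive reader model.

    def retrieve(question, corpus, k=100):
        # Toy lexical retriever: rank passages by word overlap with the question.
        q_tokens = set(question.lower().split())
        scored = sorted(corpus, key=lambda p: -len(q_tokens & set(p.lower().split())))
        return scored[:k]

    def read_passage(question, passage):
        # Stand-in for a reader: returns (answer_span, confidence).
        overlap = len(set(question.lower().split()) & set(passage.lower().split()))
        return passage.split()[0], overlap / (len(question.split()) + 1)

    def answer(question, corpus, confidence_threshold=0.5):
        best = ("", 0.0)
        for passage in retrieve(question, corpus):
            span, conf = read_passage(question, passage)   # expensive step
            if conf > best[1]:
                best = (span, conf)
            if best[1] >= confidence_threshold:            # adaptive early exit
                break
        return best

    corpus = ["Paris is the capital of France.", "The Nile is a river in Africa."]
    print(answer("What is the capital of France?", corpus))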

Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks

no code implementations 6 Oct 2021 Alan Jeffares, Qinghai Guo, Pontus Stenetorp, Timoleon Moraitis

We demonstrate these in two toy problems of sequence classification, in a temporally encoded MNIST dataset, where our RC model achieves 99.19% accuracy after the first input time-step and outperforms the state of the art in temporal coding with SNNs, and in spoken-word classification on Google Speech Commands, where it outperforms non-RC-trained early inference with LSTMs.
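
As a toy illustration of the rank-coding idea alluded to here (information carried by the order in which units first fire, with stronger inputs firing earlier), the sketch below converts input intensities into spike time-steps. The normalisation and step count are arbitrary assumptions, and this is not the authors' training procedure.

    import numpy as np

    # Toy rank (latency) coding: stronger inputs fire earlier, so information
    # is carried by the *order* of first spikes rather than by firing rates.

    def rank_encode(intensities, n_steps=10):
        x = np.asarray(intensities, dtype=float)
        x = (x - x.min()) / (x.max() - x.min() + 1e-9)        # normalise to [0, 1]
        spike_steps = np.round((1.0 - x) * (n_steps - 1)).astype(int)
        return spike_steps                                     # earlier step = larger input

    pixels = [0.9, 0.1, 0.5, 0.7]
    print(rank_encode(pixels))   # -> [0 9 4 2]: the strongest input spikes first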

Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

no code implementations EMNLP 2021 Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Research shows that natural language processing models are generally vulnerable to adversarial attacks, but recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality).

Sentiment Analysis · Text Classification

Challenges in Generalization in Open Domain Question Answering

no code implementations 2 Sep 2021 Linqing Liu, Patrick Lewis, Sebastian Riedel, Pontus Stenetorp

Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions.

Open-Domain Question Answering · Systematic Generalization
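
A crude way to quantify the train/test question overlap the excerpt refers to is an exact match over normalised question strings, as in the sketch below; real analyses also consider answer overlap and paraphrases, and the normalisation here is only illustrative.

    import re

    # Rough sketch of measuring train/test question overlap by normalised
    # string match.

    def normalise(q):
        return re.sub(r"[^a-z0-9 ]", "", q.lower()).strip()

    def overlap_rate(train_questions, test_questions):
        seen = {normalise(q) for q in train_questions}
        hits = sum(normalise(q) in seen for q in test_questions)
        return hits / max(len(test_questions), 1)

    train = ["Who wrote Hamlet?", "What is the capital of France?"]
    test = ["who wrote hamlet", "When was the Eiffel Tower built?"]
    print(overlap_rate(train, test))   # 0.5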

Controllable Abstractive Dialogue Summarization with Sketch Supervision

1 code implementation Findings (ACL) 2021 Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, Caiming Xiong

In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control.

Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation

no code implementations EMNLP 2021 Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, Douwe Kiela

We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.

Answer Selection · Question Generation

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity

no code implementations 18 Apr 2021 Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp

When primed with only a handful of training samples, very large pretrained language models such as GPT-3 have shown competitive results compared to fully supervised, fine-tuned large pretrained language models.

Text Classification
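
The prompt-order sensitivity studied here can be probed with a simple loop over permutations of the in-context examples, as sketched below. build_prompt and score_prompt are hypothetical placeholders; a real score_prompt would call a language model and measure, for example, validation accuracy for each ordering.

    from itertools import permutations

    examples = [("great film", "positive"), ("dull plot", "negative"), ("loved it", "positive")]

    def build_prompt(ordered_examples, query):
        demos = "\n".join(f"Review: {t}\nSentiment: {y}" for t, y in ordered_examples)
        return f"{demos}\nReview: {query}\nSentiment:"

    def score_prompt(prompt):
        # Arbitrary placeholder score; replace with a real model call that
        # measures accuracy for this particular prompt ordering.
        return (hash(prompt) % 100) / 100.0

    results = []
    for order in permutations(examples):
        prompt = build_prompt(order, "an average movie")
        results.append((score_prompt(prompt), order))

    best_score, best_order = max(results)
    print(best_score, [label for _, label in best_order])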

Don't Read Too Much into It: Adaptive Computation for Open-Domain Question Answering

no code implementations EMNLP 2020 Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp

Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.

Open-Domain Question Answering

Learning Reasoning Strategies in End-to-End Differentiable Proving

2 code implementations ICML 2020 Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs).

Link Prediction · Relational Reasoning

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples

no code implementations EACL 2021 Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Recent efforts have shown that neural text processing models are vulnerable to adversarial examples, but the nature of these examples is poorly understood.

Classification · General Classification · +1
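
A much simplified sketch of frequency-guided detection follows: replace unusually infrequent words with more frequent alternatives and flag inputs whose prediction confidence shifts sharply after the swap. The word_freq table, synonyms map and model_confidence function below are hypothetical stand-ins, not the paper's resources.

    word_freq = {"movie": 1000, "film": 800, "great": 900, "tremendous": 15}
    synonyms = {"tremendous": ["great"]}

    def model_confidence(text):
        # Placeholder classifier confidence; replace with a real model's
        # confidence in its original prediction.
        return 0.9 if "great" in text else 0.6

    def frequency_guided_shift(text, freq_threshold=50):
        repaired = []
        for tok in text.split():
            if word_freq.get(tok, 0) < freq_threshold and tok in synonyms:
                # Swap a rare word for its most frequent alternative.
                tok = max(synonyms[tok], key=lambda s: word_freq.get(s, 0))
            repaired.append(tok)
        return abs(model_confidence(text) - model_confidence(" ".join(repaired)))

    # A large shift after the swap suggests the input may be adversarial.
    print(frequency_guided_shift("tremendous movie"))   # ~0.3 with these toy values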

Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension

1 code implementation 2 Feb 2020 Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, Pontus Stenetorp

We find that training on adversarially collected samples leads to strong generalisation to non-adversarially collected datasets, yet performance deteriorates progressively as the models-in-the-loop become stronger.

 Ranked #1 on Reading Comprehension on AdversarialQA (using extra training data)

Reading Comprehension

Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets

no code implementations 21 Nov 2019 Saku Sugawara, Pontus Stenetorp, Kentaro Inui, Akiko Aizawa

Existing analysis work in machine reading comprehension (MRC) is largely concerned with evaluating the capabilities of systems.

Language understanding · Machine Reading Comprehension

R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason

no code implementations ACL 2020 Naoya Inoue, Pontus Stenetorp, Kentaro Inui

Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets.

Multi-Hop Reading Comprehension

Towards Machine-assisted Meta-Studies: The Hubble Constant

no code implementations 31 Jan 2019 Tom Crossland, Pontus Stenetorp, Sebastian Riedel, Daisuke Kawata, Thomas D. Kitching, Rupert A. C. Croft

We present an approach for automatic extraction of measured values from the astrophysical literature, using the Hubble constant for our pilot study.
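
As a toy illustration of the target output (a measured value with its uncertainty), the regex sketch below pulls "H0 = value ± uncertainty" statements out of raw text. The paper's system learns extraction from context rather than relying on such a hand-written pattern, so treat this only as a format example.

    import re

    # Toy extraction of "H0 = value ± uncertainty" measurements from text.
    PATTERN = re.compile(r"H_?0\s*=\s*([\d.]+)\s*(?:±|\+/-)\s*([\d.]+)")

    text = "We measure H0 = 67.4 ± 0.5 km s^-1 Mpc^-1 from the CMB."
    for value, error in PATTERN.findall(text):
        print(float(value), float(error))   # 67.4 0.5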

On the Importance of Strong Baselines in Bayesian Deep Learning

1 code implementation 23 Nov 2018 Jishnu Mukhoti, Pontus Stenetorp, Yarin Gal

Like all sub-fields of machine learning, Bayesian Deep Learning is driven by empirical validation of its theoretical proposals.

UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)

no code implementations WS 2018 Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, Sebastian Riedel

In this paper we describe our 2nd place FEVER shared-task system that achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation), and 65.41% on the development set.

Information Retrieval · Natural Language Inference · +1

Jack the Reader – A Machine Reading Framework

1 code implementation ACL 2018 Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Information Retrieval · Language understanding · +5
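
The unifying abstraction alluded to here, where QA, NLI and other tasks are cast as (support, question, candidates) → answer, can be sketched as a single input type. The field names below are illustrative and are not Jack the Reader's actual API.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ReadingInstance:
        support: List[str]                      # e.g. Wikipedia passages, or an NLI premise
        question: str                           # e.g. a question, or an NLI hypothesis
        candidates: Optional[List[str]] = None  # answer candidates, if the task provides them
        answer: Optional[str] = None            # gold answer for training

    nli_example = ReadingInstance(
        support=["A man is playing a guitar on stage."],
        question="A person is performing music.",
        candidates=["entailment", "contradiction", "neutral"],
        answer="entailment",
    )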

Jack the Reader - A Machine Reading Framework

2 code implementations 20 Jun 2018 Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Language understanding · Link Prediction · +4

Extrapolation in NLP

no code implementations WS 2018 Jeff Mitchell, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.

Constructing Datasets for Multi-hop Reading Comprehension Across Documents

no code implementations TACL 2018 Johannes Welbl, Pontus Stenetorp, Sebastian Riedel

We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods.

Multi-Hop Reading Comprehension

Convolutional 2D Knowledge Graph Embeddings

5 code implementations 5 Jul 2017 Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets.

Knowledge Graph Embeddings · Knowledge Graphs · +1
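
A minimal PyTorch-style sketch of the ConvE scoring idea follows: reshape the subject and relation embeddings into a 2D grid, apply a 2D convolution, project back to the embedding size, and score against all entity embeddings. The sizes and hyperparameters are arbitrary, and the paper's dropout and batch normalisation are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConvESketch(nn.Module):
        def __init__(self, n_entities, n_relations, dim=200, h=10, w=20):
            super().__init__()
            assert h * w == dim
            self.h, self.w = h, w
            self.entity = nn.Embedding(n_entities, dim)
            self.relation = nn.Embedding(n_relations, dim)
            self.conv = nn.Conv2d(1, 32, kernel_size=3)
            self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)

        def forward(self, subj_idx, rel_idx):
            s = self.entity(subj_idx).view(-1, 1, self.h, self.w)
            r = self.relation(rel_idx).view(-1, 1, self.h, self.w)
            x = torch.cat([s, r], dim=2)             # stack into a (2h, w) grid
            x = F.relu(self.conv(x))
            x = self.fc(x.flatten(start_dim=1))
            return x @ self.entity.weight.t()        # one score per candidate object

    model = ConvESketch(n_entities=100, n_relations=10)
    scores = model(torch.tensor([0]), torch.tensor([3]))
    print(scores.shape)   # torch.Size([1, 100])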

Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies

no code implementations 29 Dec 2016 Jonathan Godwin, Pontus Stenetorp, Sebastian Riedel

In this paper we present a novel neural network algorithm for semi-supervised learning on sequence labeling tasks arranged in a linguistically motivated hierarchy.

Chunking
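
The linguistically motivated hierarchy can be sketched as a stacked recurrent tagger in which a lower layer is supervised with a low-level task (e.g. POS tagging) and a higher layer, fed the lower layer's states, with chunking. The layer sizes and task choices below are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class HierarchicalTagger(nn.Module):
        def __init__(self, vocab, n_pos, n_chunk, emb=64, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.lower = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.upper = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
            self.pos_head = nn.Linear(2 * hidden, n_pos)      # supervised at the lower layer
            self.chunk_head = nn.Linear(2 * hidden, n_chunk)  # supervised at the upper layer

        def forward(self, token_ids):
            low, _ = self.lower(self.embed(token_ids))
            high, _ = self.upper(low)
            return self.pos_head(low), self.chunk_head(high)

    model = HierarchicalTagger(vocab=1000, n_pos=17, n_chunk=9)
    pos_logits, chunk_logits = model(torch.randint(0, 1000, (2, 5)))
    print(pos_logits.shape, chunk_logits.shape)   # (2, 5, 17) (2, 5, 9)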

Learning to Reason With Adaptive Computation

no code implementations 24 Oct 2016 Mark Neumann, Pontus Stenetorp, Sebastian Riedel

Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading.

Natural Language Inference · Reading Comprehension

An Attentive Neural Architecture for Fine-grained Entity Type Classification

no code implementations WS 2016 Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel

In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that, unlike previously proposed models, recursively composes representations of entity mention contexts.

Classification · General Classification
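
A minimal sketch of attention-weighted pooling over mention-context representations for entity typing is given below; the attention form, dimensions and number of types are illustrative assumptions rather than the paper's exact architecture.

    import torch
    import torch.nn as nn

    class AttentiveTyper(nn.Module):
        def __init__(self, dim=128, n_types=112):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
            self.classify = nn.Linear(dim, n_types)

        def forward(self, context):                 # context: (batch, seq_len, dim)
            weights = torch.softmax(self.score(context).squeeze(-1), dim=-1)
            pooled = (weights.unsqueeze(-1) * context).sum(dim=1)
            return self.classify(pooled)            # one logit per entity type

    model = AttentiveTyper()
    logits = model(torch.randn(2, 10, 128))
    print(logits.shape)   # torch.Size([2, 112])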
