Search Results for author: Pontus Stenetorp

Found 64 papers, 25 papers with code

Prompt Optimisation with Random Sampling

1 code implementation 16 Nov 2023 Yao Lu, Jiayi Wang, Sebastian Riedel, Pontus Stenetorp

Using a language model's generative capabilities to produce task-relevant separators has shown results competitive with human-curated prompts such as "TL;DR".

Language Modelling text-classification +1
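
The search loop behind this idea is simple to sketch: repeatedly sample a candidate separator and keep the best-scoring one. Below is a minimal Python sketch, where `sample_separator` and `dev_accuracy` are hypothetical stand-ins for a language-model decoding call and a downstream dev-set evaluation.

```python
import random

rng = random.Random(0)

# Hypothetical stand-ins: in practice `sample_separator` would decode from a
# language model and `dev_accuracy` would run the prompted task on a dev set.
CANDIDATES = ["TL;DR", "In summary", "Answer:", "=>", "Therefore,"]

def sample_separator() -> str:
    return rng.choice(CANDIDATES)

def dev_accuracy(separator: str) -> float:
    return rng.random()  # toy proxy score; replace with a real evaluation

def search(budget: int = 20) -> str:
    """Randomly sample separators, score each, and keep the best."""
    best_sep, best_acc = "", -1.0
    for _ in range(budget):
        sep = sample_separator()
        acc = dev_accuracy(sep)
        if acc > best_acc:
            best_sep, best_acc = sep, acc
    return best_sep

print(search())
```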

How good are Large Language Models on African Languages?

no code implementations 14 Nov 2023 Jessica Ojo, Kelechi Ogueji, Pontus Stenetorp, David I. Adelani

Our results suggest that all LLMs produce below-par performance on African languages, and there is a large gap in performance compared to high-resource languages like English on most tasks.

In-Context Learning Language Modelling +8

Using Natural Language Explanations to Improve Robustness of In-context Learning for Natural Language Inference

no code implementations 13 Nov 2023 Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp

Moreover, we introduce a new approach to X-ICL by prompting an LLM (ChatGPT in our case) with a few human-generated NLEs to produce further NLEs (we call this ChatGPT few-shot), which we show to be superior to both ChatGPT zero-shot and human-generated NLEs alone.

In-Context Learning Natural Language Inference
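
A minimal sketch of how an explanation-augmented (X-ICL) prompt could be assembled for natural language inference; the exemplar and field layout are illustrative, not the paper's exact template.

```python
# Build a few-shot NLI prompt in which every demonstration carries a natural
# language explanation (NLE). The exemplar below is illustrative only.
demos = [
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "A person is performing music.",
        "explanation": "Playing a guitar on stage is a form of musical performance.",
        "label": "entailment",
    },
]

def build_prompt(demos, premise, hypothesis):
    parts = []
    for d in demos:
        parts.append(
            f"Premise: {d['premise']}\n"
            f"Hypothesis: {d['hypothesis']}\n"
            f"Explanation: {d['explanation']}\n"
            f"Label: {d['label']}\n"
        )
    parts.append(f"Premise: {premise}\nHypothesis: {hypothesis}\nExplanation:")
    return "\n".join(parts)

print(build_prompt(demos, "A dog runs in the park.", "An animal is outside."))
```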

Gender-specific Machine Translation with Large Language Models

no code implementations 6 Sep 2023 Eduardo Sánchez, Pierre Andrews, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussà

Decoder-only Large Language Models (LLMs) have demonstrated potential in machine translation (MT), albeit with performance slightly lagging behind traditional encoder-decoder Neural Machine Translation (NMT) systems.

In-Context Learning Machine Translation +2

Non-parametric, Nearest-neighbor-assisted Fine-tuning for Neural Machine Translation

no code implementations 23 May 2023 Jiayi Wang, Ke Wang, Yuqi Zhang, Yu Zhao, Pontus Stenetorp

We explore whether such non-parametric models can improve machine translation models at the fine-tuning stage by incorporating statistics from the kNN predictions to inform the gradient updates for a baseline translation model.

Machine Translation Translation
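
For context, the sketch below implements the standard kNN-MT prediction rule on toy data: build a nearest-neighbour distribution over a (decoder state, target token) datastore and interpolate it with the model's distribution. It does not reproduce the paper's contribution of feeding kNN statistics into the gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy datastore: cached decoder states paired with the target token seen there.
VOCAB, DIM, N = 8, 16, 100
keys = rng.normal(size=(N, DIM))           # cached decoder hidden states
values = rng.integers(0, VOCAB, size=N)    # target-token ids

def knn_distribution(query, k=8, temperature=10.0):
    """Softmax over negative distances to the k nearest datastore entries."""
    d = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] / temperature)
    p = np.zeros(VOCAB)
    np.add.at(p, values[idx], w)           # accumulate weight per token id
    return p / p.sum()

def interpolate(p_model, query, lam=0.5):
    """Classic kNN-MT mixture: lam * p_kNN + (1 - lam) * p_model."""
    return lam * knn_distribution(query) + (1.0 - lam) * p_model

p_model = np.full(VOCAB, 1.0 / VOCAB)
print(interpolate(p_model, rng.normal(size=DIM)))
```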

G3Detector: General GPT-Generated Text Detector

no code implementations 22 May 2023 Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp

The burgeoning progress in the field of Large Language Models (LLMs) heralds significant benefits due to their unparalleled capacities.

Text Detection

MasakhaNEWS: News Topic Classification for African languages

1 code implementation 19 Apr 2023 David Ifeoluwa Adelani, Marek Masiak, Israel Abebe Azime, Jesujoba Alabi, Atnafu Lambebo Tonja, Christine Mwase, Odunayo Ogundepo, Bonaventure F. P. Dossou, Akintunde Oladipo, Doreen Nixdorf, Chris Chinenye Emezue, sana al-azzawi, Blessing Sibanda, Davis David, Lolwethu Ndolela, Jonathan Mukiibi, Tunde Ajayi, Tatiana Moteu, Brian Odhiambo, Abraham Owodunni, Nnaemeka Obiefuna, Muhidin Mohamed, Shamsuddeen Hassan Muhammad, Teshome Mulugeta Ababu, Saheed Abdullahi Salahudeen, Mesay Gemeda Yigezu, Tajuddeen Gwadabe, Idris Abdulmumin, Mahlet Taye, Oluwabusayo Awoyomi, Iyanuoluwa Shode, Tolulope Adelani, Habiba Abdulganiyu, Abdul-Hakeem Omotayo, Adetola Adeeko, Abeeb Afolabi, Anuoluwapo Aremu, Olanrewaju Samuel, Clemencia Siro, Wangari Kimotho, Onyekachi Ogbu, Chinedu Mbonu, Chiamaka Chukwuneke, Samuel Fanijo, Jessica Ojo, Oyinkansola Awosan, Tadesse Kebede, Toadoum Sari Sakayo, Pamela Nyatsine, Freedmore Sidume, Oreen Yousuf, Mardiyyah Oduwole, Tshinu Tshinu, Ussen Kimanuka, Thina Diko, Siyanda Nxakama, Sinodos Nigusse, Abdulmejid Johar, Shafie Mohamed, Fuad Mire Hassan, Moges Ahmed Mehamed, Evrard Ngabire, Jules Jules, Ivan Ssenkungu, Pontus Stenetorp

Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).

Classification Few-Shot Learning +6
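
As one concrete example of the prompting alternatives listed above, here is a PET-style zero-shot sketch using the Hugging Face fill-mask pipeline: wrap a headline in a cloze pattern and compare label-word probabilities. The pattern and label words are illustrative and should each be a single token for the chosen model.

```python
from transformers import pipeline

# PET-style zero-shot topic scoring with a masked LM: wrap the headline in a
# cloze pattern and compare label-word probabilities.
unmasker = pipeline("fill-mask", model="xlm-roberta-base")
mask = unmasker.tokenizer.mask_token

def classify(headline, label_words=("politics", "sports", "health")):
    pattern = f"{headline} This is {mask} news."
    predictions = unmasker(pattern, targets=list(label_words))
    return max(predictions, key=lambda p: p["score"])["token_str"]

print(classify("The president signed a new trade agreement today."))
```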

An Efficient Memory-Augmented Transformer for Knowledge-Intensive NLP Tasks

1 code implementation 30 Oct 2022 Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Experiments on various knowledge-intensive tasks such as question answering and dialogue datasets show that simply augmenting parametric models (T5-base) using our method produces more accurate results (e.g., 25.8 -> 44.3 EM on NQ) while retaining a high throughput (e.g., 1000 queries/s on NQ).

Computational Efficiency Question Answering +1

Query Expansion Using Contextual Clue Sampling with Language Models

no code implementations 13 Oct 2022 Linqing Liu, Minghan Li, Jimmy Lin, Sebastian Riedel, Pontus Stenetorp

To balance these two considerations, we propose a combination of an effective filtering strategy and fusion of the retrieved documents based on the generation probability of each context.

Information Retrieval Language Modelling +1
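
A toy sketch of this filter-then-fuse step, assuming each generated context carries a generation probability and the retrieval scores of the documents it surfaced; all values below are made up.

```python
from collections import defaultdict

# Each generated "contextual clue" comes with its generation probability and
# the documents (with retrieval scores) it retrieved. Values are illustrative.
contexts = [
    {"prob": 0.6, "docs": {"d1": 1.2, "d2": 0.7}},
    {"prob": 0.3, "docs": {"d2": 1.0, "d3": 0.9}},
    {"prob": 0.1, "docs": {"d3": 1.5}},
]

def fuse(contexts, min_prob=0.05):
    """Drop low-probability contexts, then sum each document's retrieval
    scores weighted by the generation probability of its context."""
    fused = defaultdict(float)
    for c in contexts:
        if c["prob"] < min_prob:  # simple filtering step
            continue
        for doc, score in c["docs"].items():
            fused[doc] += c["prob"] * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

print(fuse(contexts))
```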

What the DAAM: Interpreting Stable Diffusion Using Cross Attention

1 code implementation 10 Oct 2022 Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, Ferhan Ture

Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses.

Denoising Descriptive +3

ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective

no code implementations 20 Jul 2022 Yihong Chen, Pushkar Mishra, Luca Franceschi, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs).

Knowledge Graph Completion

Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

1 code implementation ACL 2022 Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi

We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data.

Natural Language Inference

Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants

no code implementations NAACL 2022 Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, Douwe Kiela

We collect training datasets in twenty experimental settings and perform a detailed analysis of this approach for the task of extractive question answering (QA) for both standard and adversarial data collection.

Extractive Question-Answering Question Answering

Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks

1 code implementation ICLR 2022 Alan Jeffares, Qinghai Guo, Pontus Stenetorp, Timoleon Moraitis

We demonstrate these in two toy problems of sequence classification, and in a temporally-encoded MNIST dataset where our RC model achieves 99.19% accuracy after the first input time-step, outperforming the state of the art in temporal coding with SNNs, as well as in spoken-word classification of Google Speech Commands, outperforming non-RC-trained early inference with LSTMs.
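
Rank coding conveys information through the order in which units fire rather than through firing rates, which is what allows a decision after the very first time-step. A minimal illustrative encoder (not the paper's trained RC network):

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_encode(x):
    """Rank coding: larger intensities fire earlier. Returns, per feature,
    the time step at which it spikes (0 = earliest)."""
    order = np.argsort(-x)            # strongest feature fires first
    times = np.empty_like(order)
    times[order] = np.arange(len(x))  # firing time = rank
    return times

x = rng.random(6)
print("intensities:", np.round(x, 2))
print("spike times:", rank_encode(x))
```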

Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

1 code implementation EMNLP 2021 Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Natural language processing models are generally considered vulnerable to adversarial attacks, but recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality).

Sentiment Analysis Sentiment Classification +3

Challenges in Generalization in Open Domain Question Answering

1 code implementation Findings (NAACL) 2022 Linqing Liu, Patrick Lewis, Sebastian Riedel, Pontus Stenetorp

Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions.

Natural Questions Open-Domain Question Answering +3

Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity

1 code implementation ACL 2022 Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp

When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models.

text-classification Text Classification
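
The order sensitivity examined in this paper is easy to reproduce in miniature: enumerate the orderings of a handful of demonstrations and measure the spread of dev-set scores. In the sketch below, `dev_score` is a hypothetical stand-in for prompting a model with one ordering and evaluating it.

```python
import random
from itertools import permutations

rng = random.Random(0)
demos = ["ex1", "ex2", "ex3", "ex4"]

def dev_score(ordering):
    """Hypothetical stand-in: prompt a model with this ordering of
    demonstrations and evaluate on a dev set."""
    return rng.random()

scores = {p: dev_score(p) for p in permutations(demos)}
best = max(scores, key=scores.get)
spread = max(scores.values()) - min(scores.values())
print(f"{len(scores)} orderings, best={best}, score spread={spread:.2f}")
```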

Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation

no code implementations EMNLP 2021 Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, Douwe Kiela

We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.

Answer Selection Question Generation

Don't Read Too Much into It: Adaptive Computation for Open-Domain Question Answering

no code implementations EMNLP 2020 Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp

Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer.

Open-Domain Question Answering
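
A minimal sketch of this retriever-reader pattern with an early-exit rule in the spirit of adaptive computation; `retrieve` and `read` are stubs, not the paper's components.

```python
import random

rng = random.Random(0)

def retrieve(question, k=10):
    """Stub retriever: returns k candidate passages (cheap in practice)."""
    return [f"passage-{i}" for i in range(k)]

def read(question, passage):
    """Stub reader: returns (answer, confidence). Expensive in practice."""
    return f"answer-from-{passage}", rng.random()

def answer(question, threshold=0.9):
    """Read passages one by one and stop as soon as the reader is confident,
    so easy questions consume less reader computation."""
    best_answer, best_conf = None, -1.0
    for passage in retrieve(question):
        ans, conf = read(question, passage)
        if conf > best_conf:
            best_answer, best_conf = ans, conf
        if conf >= threshold:  # early exit: adaptive computation
            break
    return best_answer, best_conf

print(answer("Who wrote Hamlet?"))
```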

Learning Reasoning Strategies in End-to-End Differentiable Proving

2 code implementations ICML 2020 Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs).

Link Prediction Relational Reasoning

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples

no code implementations EACL 2021 Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Recent efforts have shown that neural text processing models are vulnerable to adversarial examples, but the nature of these examples is poorly understood.

General Classification SST-2 +1

Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension

1 code implementation 2 Feb 2020 Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, Pontus Stenetorp

We find that training on adversarially collected samples leads to strong generalisation to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop.

Ranked #1 on Reading Comprehension on AdversarialQA (using extra training data)

Reading Comprehension

Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets

no code implementations 21 Nov 2019 Saku Sugawara, Pontus Stenetorp, Kentaro Inui, Akiko Aizawa

Existing analysis work in machine reading comprehension (MRC) is largely concerned with evaluating the capabilities of systems.

Benchmarking Machine Reading Comprehension +1

R4C: A Benchmark for Evaluating RC Systems to Get the Right Answer for the Right Reason

no code implementations ACL 2020 Naoya Inoue, Pontus Stenetorp, Kentaro Inui

Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets.

Multi-Hop Reading Comprehension

Towards Machine-assisted Meta-Studies: The Hubble Constant

no code implementations 31 Jan 2019 Tom Crossland, Pontus Stenetorp, Sebastian Riedel, Daisuke Kawata, Thomas D. Kitching, Rupert A. C. Croft

We present an approach for automatic extraction of measured values from the astrophysical literature, using the Hubble constant for our pilot study.

On the Importance of Strong Baselines in Bayesian Deep Learning

1 code implementation 23 Nov 2018 Jishnu Mukhoti, Pontus Stenetorp, Yarin Gal

Like all sub-fields of machine learning, Bayesian Deep Learning is driven by empirical validation of its theoretical proposals.

UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF)

no code implementations WS 2018 Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, Sebastian Riedel

In this paper we describe our 2nd place FEVER shared-task system that achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation), and 65.41% on the development set.

Information Retrieval Natural Language Inference +3

Jack the Reader - A Machine Reading Framework

2 code implementations 20 Jun 2018 Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bošnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel

For example, in Question Answering, the supporting text can be newswire or Wikipedia articles; in Natural Language Inference, premises can be seen as the supporting text and hypotheses as questions.

Link Prediction Natural Language Inference +3

Extrapolation in NLP

no code implementations WS 2018 Jeff Mitchell, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data.

Constructing Datasets for Multi-hop Reading Comprehension Across Documents

no code implementations TACL 2018 Johannes Welbl, Pontus Stenetorp, Sebastian Riedel

We propose a novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods.

Multi-Hop Reading Comprehension Sentence

Convolutional 2D Knowledge Graph Embeddings

8 code implementations 5 Jul 2017 Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel

In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets.

Ranked #1 on Link Prediction on WN18 (using extra training data)

Knowledge Graph Embeddings Knowledge Graphs +1
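
A compact PyTorch sketch of the ConvE scoring function: reshape the subject and relation embeddings to 2D, stack and convolve them, project back to the embedding space, and score all entities with a dot product. Sizes are illustrative, and the full model's dropout, batch normalisation, and per-entity bias are omitted.

```python
import torch
import torch.nn as nn

class ConvE(nn.Module):
    """Minimal ConvE scorer: 2D reshape, stack, convolve, project, dot."""
    def __init__(self, n_entities, n_relations, dim=200, h=10, w=20):
        super().__init__()
        assert h * w == dim
        self.h, self.w = h, w
        self.entity = nn.Embedding(n_entities, dim)
        self.relation = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)
        self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)

    def forward(self, subj, rel):
        e = self.entity(subj).view(-1, 1, self.h, self.w)
        r = self.relation(rel).view(-1, 1, self.h, self.w)
        x = torch.cat([e, r], dim=2)          # (B, 1, 2h, w)
        x = torch.relu(self.conv(x))          # (B, 32, 2h-2, w-2)
        x = self.fc(x.flatten(start_dim=1))   # (B, dim)
        return x @ self.entity.weight.t()     # scores over all entities

model = ConvE(n_entities=1000, n_relations=50)
scores = model(torch.tensor([0, 1]), torch.tensor([3, 4]))
print(scores.shape)  # torch.Size([2, 1000])
```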

Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies

no code implementations 29 Dec 2016 Jonathan Godwin, Pontus Stenetorp, Sebastian Riedel

In this paper we present a novel Neural Network algorithm for conducting semi-supervised learning for sequence labeling tasks arranged in a linguistically motivated hierarchy.

Chunking

Learning to Reason With Adaptive Computation

no code implementations 24 Oct 2016 Mark Neumann, Pontus Stenetorp, Sebastian Riedel

Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading.

BIG-bench Machine Learning Natural Language Inference +1

An Attentive Neural Architecture for Fine-grained Entity Type Classification

no code implementations WS 2016 Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel

In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that unlike previously proposed models recursively composes representations of entity mention contexts.

Classification General Classification +1
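
The core of such a model is soft attention over the encoded context tokens surrounding a mention. A small NumPy sketch of that attention step, with random vectors standing in for encoder states:

```python
import numpy as np

rng = np.random.default_rng(0)

def attend(context, w):
    """Soft attention over context token vectors: score each token with a
    learned vector `w`, softmax the scores, return the weighted average."""
    scores = context @ w                      # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ context                  # (D,)

T, D = 7, 16                                  # context length, vector size
context = rng.normal(size=(T, D))             # e.g. bi-LSTM states around a mention
w = rng.normal(size=D)
mention = rng.normal(size=D)                  # averaged mention embedding
representation = np.concatenate([mention, attend(context, w)])
print(representation.shape)                   # (32,) -> fed to a type classifier
```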
