Search Results for author: Oscar Sainz

Found 12 papers, 9 papers with code

Ask2Transformers: Zero-Shot Domain Labelling with Pretrained Language Models

1 code implementation · EACL (GWC) 2021 · Oscar Sainz, German Rigau

In this paper we present a system that exploits different pre-trained Language Models for assigning domain labels to WordNet synsets without any kind of supervision.

Domain Labelling
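The Ask2Transformers entry above describes scoring candidate domain labels for a synset gloss with a pre-trained Language Model. A minimal sketch of that pattern follows; the hypothesis template wording is an assumption, and the toy word-overlap scorer stands in for the pre-trained model so the sketch runs without downloads:

```python
import re

# Hedged sketch of the Ask2Transformers idea: verbalize each candidate
# WordNet domain into an NLI-style hypothesis, score it against the synset
# gloss, and keep the best label. `entailment_score` is a toy word-overlap
# stand-in for the pre-trained model's entailment probability.

HYPOTHESIS_TEMPLATE = "The topic of the text is {label}"

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def entailment_score(premise: str, hypothesis: str) -> float:
    # Fraction of hypothesis words that also appear in the premise.
    h = tokens(hypothesis)
    return len(tokens(premise) & h) / len(h)

def label_domain(gloss: str, candidate_domains: list[str]) -> str:
    scored = {
        d: entailment_score(gloss, HYPOTHESIS_TEMPLATE.format(label=d))
        for d in candidate_domains
    }
    return max(scored, key=scored.get)

gloss = "a physician who specializes in medicine and the treatment of disease"
domains = ["medicine", "sport", "music"]
print(label_domain(gloss, domains))  # → medicine
```

Swapping the toy scorer for a real NLI or zero-shot-classification model is the only change needed to turn this into the full unsupervised pipeline.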

Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction

1 code implementation · EMNLP 2021 · Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, Eneko Agirre

In our experiments on TACRED we attain 63% F1 zero-shot, 69% with 16 examples per relation (17 F1 points better than the best supervised system under the same conditions), and fall only 4 points short of the state-of-the-art (which uses 20 times more training data).

Natural Language Inference · Relation · +1
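The entry above casts relation extraction as entailment: each relation label is verbalized into a natural-language hypothesis and an NLI model judges whether the sentence entails it. A minimal sketch of that mechanism; the template inventory is illustrative (not the paper's exact set), and the toy overlap scorer stands in for the NLI model:

```python
import re

# Hedged sketch of label verbalization for zero-shot relation extraction:
# verbalize each relation into a hypothesis with the entity pair filled in,
# then pick the relation whose hypothesis scores highest against the
# sentence. `entailment_score` is a toy stand-in for an NLI model.

TEMPLATES = {
    "per:city_of_birth": "{subj} was born in {obj}",
    "per:employee_of": "{subj} works for {obj}",
    "no_relation": "{subj} is not related to {obj}",
}

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def entailment_score(premise: str, hypothesis: str) -> float:
    h = tokens(hypothesis)
    return len(tokens(premise) & h) / len(h)

def classify_relation(sentence: str, subj: str, obj: str) -> str:
    scored = {
        rel: entailment_score(sentence, tpl.format(subj=subj, obj=obj))
        for rel, tpl in TEMPLATES.items()
    }
    return max(scored, key=scored.get)

print(classify_relation("Alice was born in Paris in 1970.", "Alice", "Paris"))
# → per:city_of_birth
```

Because the label knowledge lives entirely in the templates, adding a new relation requires only writing a new verbalization, not annotating new training data.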

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark

1 code implementation · 27 Oct 2023 · Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre

In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble.

Language Modelling · Large Language Model · +1

GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction

1 code implementation · 5 Oct 2023 · Oscar Sainz, Iker García-Ferrero, Rodrigo Agerri, Oier Lopez de Lacalle, German Rigau, Eneko Agirre

In this paper, we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines.

 Ranked #1 on Zero-shot Named Entity Recognition (NER) on HarveyNER (using extra training data)

Event Argument Extraction · Language Modelling · +6
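The GoLLIE entry describes fine-tuning a model to follow annotation guidelines supplied as part of the input. One way GoLLIE-style prompts express this is as Python class definitions whose docstrings carry the guidelines, followed by the text to annotate. A hedged sketch of building such a prompt; the class name, field, and layout are illustrative assumptions, not the paper's verbatim format:

```python
# Hedged sketch of the GoLLIE idea: the task schema is handed to the model
# as Python class definitions whose docstrings carry the annotation
# guidelines, and the model is fine-tuned to emit instances of those
# classes for the input text. Names and layout here are illustrative.

def build_prompt(guidelines: str, text: str) -> str:
    schema = (
        "class Launcher:\n"
        f'    """{guidelines}"""\n'
        "    mention: str\n"
    )
    return (
        "# Task definition\n"
        f"{schema}\n"
        "# Input\n"
        f"text = {text!r}\n\n"
        "# Annotations\n"
        "result ="
    )

prompt = build_prompt(
    "An entity that puts a satellite or spacecraft into orbit, "
    "e.g. a space agency or aerospace company.",
    "SpaceX launched 60 Starlink satellites from Cape Canaveral.",
)
print(prompt)
```

Because the guideline text travels with the schema, an unseen label type can be described at inference time without retraining, which is what enables the zero-shot gains the abstract reports.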

Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning

1 code implementation · Findings (NAACL) 2022 · Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, Eneko Agirre

In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need for manual annotation to 50% and 20% in ACE and WikiEvents respectively, while achieving the same performance as with full training.

Event Argument Extraction · Natural Language Inference · +2

ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations

2 code implementations · NAACL (ACL) 2022 · Oscar Sainz, Haoling Qiu, Oier Lopez de Lacalle, Eneko Agirre, Bonan Min

The current workflow for Information Extraction (IE) analysts involves the definition of the entities/relations of interest and a training corpus with annotated examples.

Natural Language Inference · Zero-Shot Learning

Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction

1 code implementation · 8 Sep 2021 · Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, Eneko Agirre

In our experiments on TACRED we attain 63% F1 zero-shot, 69% with 16 examples per relation (17 F1 points better than the best supervised system under the same conditions), and fall only 4 points short of the state-of-the-art (which uses 20 times more training data).

Natural Language Inference · Relation · +1

Ask2Transformers: Zero-Shot Domain Labelling with Pre-trained Language Models

1 code implementation · 7 Jan 2021 · Oscar Sainz, German Rigau

In this paper we present a system that exploits different pre-trained Language Models for assigning domain labels to WordNet synsets without any kind of supervision.

Domain Labelling

Domain Adapted Distant Supervision for Pedagogically Motivated Relation Extraction

no code implementations · LREC 2020 · Oscar Sainz, Oier Lopez de Lacalle, Itziar Aldabe, Montse Maritxalar

In this paper we present a relation extraction system that, given a text, extracts pedagogically motivated relation types as a first step towards obtaining a semantic representation of the text, which will make it possible to automatically generate questions for reading comprehension.

Reading Comprehension · Relation · +3
