Search Results for author: Iker García-Ferrero

Found 8 papers, 8 papers with code

GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction

1 code implementation • 5 Oct 2023 • Oscar Sainz, Iker García-Ferrero, Rodrigo Agerri, Oier Lopez de Lacalle, German Rigau, Eneko Agirre

In this paper, we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines.

Ranked #1 on Zero-shot Named Entity Recognition (NER) on HarveyNER (using extra training data)

Event Argument Extraction · Language Modelling · +6

Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings

4 code implementations • 23 Oct 2022 • Iker García-Ferrero, Rodrigo Agerri, German Rigau

Zero-resource cross-lingual transfer approaches aim to apply supervised models from a source language to unlabelled target languages.

Cross-Lingual NER · Machine Translation · +1

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark

1 code implementation • 27 Oct 2023 • Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre

In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble.

Language Modelling · Large Language Model · +1

T-Projection: High Quality Annotation Projection for Sequence Labeling Tasks

2 code implementations • 20 Dec 2022 • Iker García-Ferrero, Rodrigo Agerri, German Rigau

In the absence of readily available labeled data for a given sequence labeling task and language, annotation projection has been proposed as one of the possible strategies to automatically generate annotated data.

Ranked #1 on Cross-Lingual NER on MasakhaNER2.0 (Hausa metric)

Cross-Lingual NER · Machine Translation · +2

This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models

1 code implementation • 24 Oct 2023 • Iker García-Ferrero, Begoña Altuna, Javier Álvez, Itziar Gonzalez-Dios, German Rigau

We have used our dataset with the largest available open LLMs in a zero-shot approach to assess their generalization and inference capabilities, and we have also fine-tuned some of the models to test whether the understanding of negation can be trained.

Descriptive · Negation · +2
