Search Results for author: Eneko Agirre

Found 95 papers, 31 papers with code

Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction

1 code implementation · EMNLP 2021 · Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, Eneko Agirre

In our experiments on TACRED we attain 63% F1 zero-shot and 69% F1 with 16 examples per relation (17 F1 points better than the best supervised system under the same conditions), only 4 points short of the state-of-the-art (which uses 20 times more training data).

Natural Language Inference · Relation +1
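The approach above frames relation extraction as entailment: each candidate relation label is verbalized into a hypothesis sentence and scored by an NLI model against the input sentence. A minimal sketch of that pipeline, with a toy `nli_entailment_score` standing in for a real pretrained NLI model, and with illustrative templates and threshold (not the paper's actual ones):

```python
# Zero-shot relation extraction via label verbalization + entailment.
# The NLI scorer below is a word-overlap toy; in practice it would be a
# pretrained NLI model scoring (premise, hypothesis) pairs.

TEMPLATES = {
    # Hypothetical verbalizations: one or more templates per relation.
    "per:city_of_birth": ["{subj} was born in {obj}."],
    "org:founded_by": ["{subj} was founded by {obj}.", "{obj} founded {subj}."],
    "no_relation": [],
}

def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Toy entailment scorer: word overlap as a proxy for a real NLI model."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

def extract_relation(sentence, subj, obj, threshold=0.5):
    """Return the best-scoring verbalized relation, or no_relation."""
    best_label, best_score = "no_relation", threshold
    for label, templates in TEMPLATES.items():
        for t in templates:
            score = nli_entailment_score(sentence, t.format(subj=subj, obj=obj))
            if score > best_score:
                best_label, best_score = label, score
    return best_label
```

Few-shot fine-tuning of the NLI model on the verbalized examples follows the same template mechanism.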

Event Extraction in Basque: Typologically motivated Cross-Lingual Transfer-Learning Analysis

no code implementations · 9 Apr 2024 · Mikel Zubillaga, Oscar Sainz, Ainara Estarrona, Oier Lopez de Lacalle, Eneko Agirre

To perform the experiments we introduce EusIE, an event extraction dataset for Basque, which follows the Multilingual Event Extraction dataset (MEE).

Cross-Lingual Transfer · Event Extraction +4

Grounding Spatial Relations in Text-Only Language Models

1 code implementation · 20 Mar 2024 · Gorka Azkune, Ander Salaberria, Eneko Agirre

This paper shows that text-only Language Models (LMs) can learn to ground spatial relations like "left of" or "below" when provided with explicit object location information and properly trained to leverage those locations.
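The key idea above is that object locations can be serialized into the text itself. A minimal sketch of such a verbalization, with an illustrative format (the paper's actual input encoding may differ), plus the symbolic relation the LM is expected to learn:

```python
# Sketch: verbalizing object locations so a text-only LM can ground
# spatial relations. The serialization format is made up for illustration.

def describe_scene(objects):
    """objects: list of (name, x, y) with normalized center coordinates.
    Produces a textual scene description a text-only LM can consume."""
    return " ".join(f"{name} at ({x:.2f}, {y:.2f})" for name, x, y in objects)

def left_of(a, b, objects):
    """The geometric relation the trained LM should recover from text."""
    pos = {name: (x, y) for name, x, y in objects}
    return pos[a][0] < pos[b][0]
```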

PixT3: Pixel-based Table-To-Text Generation

no code implementations · 16 Nov 2023 · Iñigo Alonso, Eneko Agirre, Mirella Lapata

Table-to-text generation involves generating appropriate textual descriptions given structured tabular data.

Data-to-Text Generation · Self-Supervised Learning +1

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark

1 code implementation · 27 Oct 2023 · Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre

In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble.

Language Modelling · Large Language Model +1

Automatic Logical Forms improve fidelity in Table-to-Text generation

1 code implementation · 26 Oct 2023 · Iñigo Alonso, Eneko Agirre

Table-to-text systems generate natural language statements from structured data like tables.

Table-to-Text Generation

Unsupervised Domain Adaption for Neural Information Retrieval

no code implementations · 13 Oct 2023 · Carlos Dominguez, Jon Ander Campos, Eneko Agirre, Gorka Azkune

We focus on the BEIR benchmark, which includes test datasets from several domains with no training data, and explore two scenarios: zero-shot, where the supervised system is trained on a large out-of-domain dataset (MS-MARCO); and unsupervised domain adaptation, where, in addition to MS-MARCO, the system is fine-tuned on synthetic data from the target domain.

Information Retrieval · Retrieval +1

GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction

1 code implementation · 5 Oct 2023 · Oscar Sainz, Iker García-Ferrero, Rodrigo Agerri, Oier Lopez de Lacalle, German Rigau, Eneko Agirre

In this paper, we propose GoLLIE (Guideline-following Large Language Model for IE), a model able to improve zero-shot results on unseen IE tasks by virtue of being fine-tuned to comply with annotation guidelines.

Ranked #1 on Zero-shot Named Entity Recognition (NER) on HarveyNER (using extra training data)

Event Argument Extraction · Language Modelling +6
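GoLLIE's core move is conditioning the model on annotation guidelines at inference time. A hypothetical sketch of assembling such a guideline-conditioned prompt (GoLLIE's real input format is code-like Python task definitions; the plain-text layout below is illustrative only):

```python
# Sketch: building a guideline-conditioned IE prompt. Field names and
# layout are made up for illustration, not GoLLIE's actual format.

def build_prompt(guidelines: dict, text: str) -> str:
    """guidelines: mapping from label name to its annotation definition."""
    lines = ["# Annotation guidelines"]
    for label, definition in guidelines.items():
        lines.append(f"{label}: {definition}")
    lines.append("# Text")
    lines.append(text)
    lines.append("# Annotations")
    return "\n".join(lines)
```

At fine-tuning time the model sees many such (guidelines, text, annotations) triples, so at test time new guidelines describe unseen labels.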

CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models

no code implementations · 23 May 2023 · Aitor Ormazabal, Mikel Artetxe, Eneko Agirre

Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters.

Machine Translation

Lessons learned from the evaluation of Spanish Language Models

1 code implementation · 16 Dec 2022 · Rodrigo Agerri, Eneko Agirre

Given the impact of language models on the field of Natural Language Processing, a number of Spanish encoder-only masked language models (aka BERTs) have been trained and released.

PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation

1 code implementation · 24 May 2022 · Aitor Ormazabal, Mikel Artetxe, Manex Agirrezabal, Aitor Soroa, Eneko Agirre

During inference, we build control codes for the desired meter and rhyme scheme, and condition our language model on them to generate formal verse poetry.

Language Modelling · valid
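The inference procedure above prefixes generation with structural control codes. A minimal sketch of building such a prefix for a syllable count and rhyme scheme (the token format below is hypothetical, not PoeLM's actual encoding):

```python
# Sketch: control-code prefix for formal verse generation. PoeLM encodes
# structural constraints (line lengths, rhyme endings) as prefix tokens;
# the token names here are invented for illustration.

def build_control_codes(n_lines: int, syllables: int, rhyme_scheme: str) -> str:
    """rhyme_scheme, e.g. 'ABAB': one letter per line; equal letters rhyme."""
    assert len(rhyme_scheme) == n_lines, "one rhyme letter per line"
    codes = [f"<len:{n_lines}>"]
    for letter in rhyme_scheme:
        codes.append(f"<syll:{syllables}> <rhyme:{letter}>")
    return " ".join(codes)
```

The language model, trained on corpora annotated with the same codes, then generates verse conditioned on this prefix.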

Principled Paraphrase Generation with Parallel Corpora

1 code implementation · ACL 2022 · Aitor Ormazabal, Mikel Artetxe, Aitor Soroa, Gorka Labaka, Eneko Agirre

Round-trip Machine Translation (MT) is a popular choice for paraphrase generation, which leverages readily available parallel corpora for supervision.

Machine Translation · Paraphrase Generation +1

Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning

1 code implementation · Findings (NAACL) 2022 · Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, Eneko Agirre

In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need for manual annotation to 50% and 20% in ACE and WikiEvents respectively, while achieving the same performance as with full training.

Event Argument Extraction · Natural Language Inference +2

ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations

2 code implementations · NAACL (ACL) 2022 · Oscar Sainz, Haoling Qiu, Oier Lopez de Lacalle, Eneko Agirre, Bonan Min

The current workflow for Information Extraction (IE) analysts involves the definition of the entities/relations of interest and a training corpus with annotated examples.

Natural Language Inference · Zero-Shot Learning

Image Captioning for Effective Use of Language Models in Knowledge-Based Visual Question Answering

1 code implementation · 15 Sep 2021 · Ander Salaberria, Gorka Azkune, Oier Lopez de Lacalle, Aitor Soroa, Eneko Agirre

Our results on a visual question answering task which requires external knowledge (OK-VQA) show that our text-only model outperforms pretrained multimodal (image-text) models with a comparable number of parameters.

Image Captioning · Knowledge Graphs +3
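The pipeline behind this result replaces the image with an automatically generated caption, so a text-only model can answer knowledge-based questions. A minimal sketch under that assumption, where `caption_model` and `qa_model` are hypothetical stand-ins for pretrained components:

```python
# Sketch: knowledge-based VQA with a text-only model. The image is first
# verbalized by a captioner, then caption + question go to a text-only
# QA model that can draw on its implicit knowledge. The prompt format
# is illustrative, not the paper's exact template.

def answer_visual_question(image, question, caption_model, qa_model):
    caption = caption_model(image)                      # image -> text
    prompt = f"context: {caption} question: {question}"
    return qa_model(prompt)                             # text-only reasoning
```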

Inferring spatial relations from textual descriptions of images

1 code implementation · 1 Feb 2021 · Aitzol Elu, Gorka Azkune, Oier Lopez de Lacalle, Ignacio Arganda-Carreras, Aitor Soroa, Eneko Agirre

Previous work did not use the caption text, relying instead on a manually provided relation holding between the subject and the object.

Common Sense Reasoning · Object +1

Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning

1 code implementation · COLING 2020 · Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Gorka Azkune, Eneko Agirre

The interaction of conversational systems with users poses an exciting opportunity for improving them after deployment, but little evidence has been provided of its feasibility.

Conversational Question Answering · Document Classification

Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque

no code implementations · LREC 2020 · Arantxa Otegi, Aitor Agirre, Jon Ander Campos, Aitor Soroa, Eneko Agirre

Conversational Question Answering (CQA) systems meet user information needs by having conversations with them, where answers to the questions are retrieved from text.

Conversational Question Answering · Cross-Lingual Transfer

A Call for More Rigor in Unsupervised Cross-lingual Learning

no code implementations · ACL 2020 · Mikel Artetxe, Sebastian Ruder, Dani Yogatama, Gorka Labaka, Eneko Agirre

We review motivations, definitions, approaches, and methodology for unsupervised cross-lingual learning and call for a more rigorous position on each of them.

Cross-Lingual Word Embeddings · Position +3

Translation Artifacts in Cross-lingual Transfer Learning

1 code implementation · EMNLP 2020 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

Both human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique.

Cross-Lingual Transfer · Machine Translation +3

Evaluating Multimodal Representations on Visual Semantic Textual Similarity

1 code implementation · 4 Apr 2020 · Oier Lopez de Lacalle, Ander Salaberria, Aitor Soroa, Gorka Azkune, Eneko Agirre

In the case of textual representations, inference tasks such as Textual Entailment and Semantic Textual Similarity have often been used to benchmark the quality of textual representations.

Benchmarking · Image Captioning +4

Bilingual Lexicon Induction through Unsupervised Machine Translation

1 code implementation · ACL 2019 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

A recent research line has obtained strong results on bilingual lexicon induction by aligning independently trained word embeddings in two languages and using the resulting cross-lingual embeddings to induce word translation pairs through nearest neighbor or related retrieval methods.

Bilingual Lexicon Induction · Language Modelling +6
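The retrieval step described above can be sketched as plain cosine nearest-neighbor search over already-aligned cross-lingual embeddings. The toy 2-dimensional vectors below are for illustration; real systems use high-dimensional learned embeddings and refinements such as CSLS retrieval:

```python
import math

# Sketch: inducing translation pairs by nearest-neighbor retrieval over
# cross-lingual word embeddings (toy vectors for illustration).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def induce_lexicon(src_emb, tgt_emb):
    """For each source word, pick the cosine-nearest target word."""
    return {
        s: max(tgt_emb, key=lambda t: cosine(vs, tgt_emb[t]))
        for s, vs in src_emb.items()
    }
```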

Analyzing the Limitations of Cross-lingual Word Embedding Mappings

no code implementations · ACL 2019 · Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, Eneko Agirre

Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations.

Bilingual Lexicon Induction · Cross-Lingual Word Embeddings +1

Survey on Evaluation Methods for Dialogue Systems

no code implementations · 10 May 2019 · Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, Mark Cieliebak

We cover each class by introducing the main technologies developed for these dialogue systems and then presenting the evaluation methods for that class.

Question Answering · Task-Oriented Dialogue Systems

An Effective Approach to Unsupervised Machine Translation

1 code implementation · ACL 2019 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only.

NMT · Translation +1

Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation

2 code implementations · CoNLL 2018 · Mikel Artetxe, Gorka Labaka, Iñigo Lopez-Gazpio, Eneko Agirre

Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness.

Word Embeddings

Unsupervised Statistical Machine Translation

3 code implementations · EMNLP 2018 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018).

Language Modelling · NMT +2

A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings

2 code implementations · ACL 2018 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training.

Cross-Lingual Word Embeddings · Self-Learning +1

Unsupervised Neural Machine Translation

2 code implementations · ICLR 2018 · Mikel Artetxe, Gorka Labaka, Eneko Agirre, Kyunghyun Cho

In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs.

NMT · Translation +1

Learning bilingual word embeddings with (almost) no bilingual data

no code implementations · ACL 2017 · Mikel Artetxe, Gorka Labaka, Eneko Agirre

Most methods to learn bilingual word embeddings rely on large parallel corpora, which are difficult to obtain for most language pairs.

Document Classification · Entity Linking +5

Improving Translation Selection with Supersenses

no code implementations · COLING 2016 · Haiqing Tang, Deyi Xiong, Oier Lopez de Lacalle, Eneko Agirre

Selecting appropriate translations for source words with multiple meanings remains a challenge for statistical machine translation (SMT).

Machine Translation · Translation +1

A comparison of Named-Entity Disambiguation and Word Sense Disambiguation

no code implementations · LREC 2016 · Angel Chang, Valentin I. Spitkovsky, Christopher D. Manning, Eneko Agirre

Named Entity Disambiguation (NED) is the task of linking a named-entity mention to an instance in a knowledge base, typically Wikipedia-derived resources like DBpedia.

Entity Disambiguation · Word Sense Disambiguation

Evaluating Translation Quality and CLIR Performance of Query Sessions

no code implementations · LREC 2016 · Xabier Saralegi, Eneko Agirre, Iñaki Alegria

Translation quality improved in all three types (generalization, specification, and drifting), and CLIR improved for generalization and specification sessions, preserving the performance in drifting sessions.

Cross-Lingual Information Retrieval · Retrieval +1

Word Sense-Aware Machine Translation: Including Senses as Contextual Features for Improved Translation Models

no code implementations · LREC 2016 · Steven Neale, Luís Gomes, Eneko Agirre, Oier Lopez de Lacalle, António Branco

Although it is commonly assumed that word sense disambiguation (WSD) should help to improve lexical choice and the quality of machine translation systems, how to successfully integrate word senses into such systems remains an unanswered question.

Machine Translation · Translation +1

Addressing the MFS Bias in WSD systems

no code implementations · LREC 2016 · Marten Postma, Ruben Izquierdo, Eneko Agirre, German Rigau, Piek Vossen

Word Sense Disambiguation (WSD) systems tend to have a strong bias towards assigning the Most Frequent Sense (MFS), which results in high performance on the MFS but very low performance on the less frequent senses.

Word Sense Disambiguation
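The MFS bias discussed above is usually measured against the standard MFS baseline, which always assigns each word its most frequent sense from annotated training counts. A minimal sketch with toy counts (real systems derive frequencies from resources like SemCor/WordNet):

```python
from collections import Counter

# Sketch: the Most Frequent Sense (MFS) baseline that WSD systems are
# biased towards. Sense labels and counts below are toy examples.

def mfs_baseline(training_instances):
    """training_instances: iterable of (word, sense) pairs.
    Returns a tagger that assigns each word its most frequent sense."""
    counts = {}
    for word, sense in training_instances:
        counts.setdefault(word, Counter())[sense] += 1
    return lambda word: counts[word].most_common(1)[0][0]
```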

QTLeap WSD/NED Corpora: Semantic Annotation of Parallel Corpora in Six Languages

no code implementations · LREC 2016 · Arantxa Otegi, Nora Aranberri, Antonio Branco, Jan Hajič, Martin Popel, Kiril Simov, Eneko Agirre, Petya Osenova, Rita Pereira, João Silva, Steven Neale

This work presents parallel corpora automatically annotated with several NLP tools, including lemma and part-of-speech tagging, named-entity recognition and classification, named-entity disambiguation, word-sense disambiguation, and coreference.

Cross-Lingual Transfer · Entity Disambiguation +9

Evaluating the word-expert approach for Named-Entity Disambiguation

no code implementations · 15 Mar 2016 · Angel X. Chang, Valentin I. Spitkovsky, Christopher D. Manning, Eneko Agirre

Named Entity Disambiguation (NED) is the task of linking a named-entity mention to an instance in a knowledge base, typically Wikipedia.

Entity Disambiguation · Word Sense Disambiguation

Improving distant supervision using inference learning

no code implementations · IJCNLP 2015 · Roland Roller, Eneko Agirre, Aitor Soroa, Mark Stevenson

Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort.

Relation · Relation Extraction
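Distant supervision, as summarized above, labels any sentence that mentions both entities of a known knowledge-base triple as a positive example for that relation. A minimal sketch with a toy KB (real pipelines add entity linking and noise filtering, such as the inference learning this paper proposes):

```python
# Sketch: generating relation-extraction training data by distant
# supervision. The KB triples and sentences are toy examples.

KB = {("Paris", "capital_of", "France"), ("Berlin", "capital_of", "Germany")}

def distant_label(sentences):
    """Label a sentence with relation r if it mentions both entities of
    some KB triple (subject, r, object). Naive substring matching."""
    labelled = []
    for sent in sentences:
        for subj, rel, obj in KB:
            if subj in sent and obj in sent:
                labelled.append((sent, subj, rel, obj))
    return labelled
```

The known weakness, which motivates filtering approaches, is that a sentence mentioning both entities need not actually express the relation.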

Studying the Wikipedia Hyperlink Graph for Relatedness and Disambiguation

1 code implementation · 5 Mar 2015 · Eneko Agirre, Ander Barrena, Aitor Soroa

Hyperlinks and other relations in Wikipedia are an extraordinary resource which is still not fully understood.

Entity Disambiguation

Matching Cultural Heritage items to Wikipedia

no code implementations · LREC 2012 · Eneko Agirre, Ander Barrena, Oier Lopez de Lacalle, Aitor Soroa, Samuel Fernando, Mark Stevenson

Digitised Cultural Heritage (CH) items usually have short descriptions and lack rich contextual information.

Entity Linking
