Search Results for author: Vladimir Araujo

Found 18 papers, 8 papers with code

Sequence-to-Sequence Spanish Pre-trained Language Models

1 code implementation • 20 Sep 2023 • Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufiño, Marie-Francine Moens

In recent years, substantial advancements in pre-trained language models have paved the way for the development of numerous non-English language versions, with a particular focus on encoder-only and decoder-only architectures.

Generative Question Answering · Natural Language Understanding

A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information

no code implementations • 12 May 2023 • Vladimir Araujo, Alvaro Soto, Marie-Francine Moens

Existing question answering methods often assume that the input content (e.g., documents or videos) is always accessible to solve the task.

Memorization · Question Answering

How Relevant is Selective Memory Population in Lifelong Language Learning?

no code implementations • 3 Oct 2022 • Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, Marie-Francine Moens

Lifelong language learning seeks to have models continuously learn multiple tasks in a sequential order without suffering from catastrophic forgetting.

Question Answering · text-classification · +1

Memory Population in Continual Learning via Outlier Elimination

1 code implementation • 4 Jul 2022 • Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu

This paper introduces Memory Outlier Elimination (MOE), a method for identifying and eliminating outliers in the memory buffer by choosing samples from label-homogeneous subpopulations.

Continual Learning
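
The abstract above describes the general recipe of MOE: populate the replay memory with samples drawn from label-homogeneous subpopulations and discard outliers. Below is a minimal, hedged sketch of that idea in Python, assuming outliers are scored by distance to the per-class feature centroid; the paper's exact scoring rule and buffer management may differ.

```python
import numpy as np

def populate_memory(features, labels, per_class_budget):
    """Select memory samples by dropping per-class outliers.

    Sketch only: scores each sample by its distance to its class
    centroid in feature space and keeps the closest ones, which
    approximates choosing samples from label-homogeneous
    subpopulations. The paper's exact criterion may differ.
    """
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        class_feats = features[idx]
        centroid = class_feats.mean(axis=0)
        dists = np.linalg.norm(class_feats - centroid, axis=1)
        # Samples closest to the centroid are treated as inliers.
        keep.extend(idx[np.argsort(dists)[:per_class_budget]])
    return np.array(keep)

# Illustrative usage with random features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labels = rng.integers(0, 5, size=100)
memory_idx = populate_memory(feats, labels, per_class_budget=10)
```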

ALBETO and DistilBETO: Lightweight Spanish Language Models

2 code implementations • LREC 2022 • José Cañete, Sebastián Donoso, Felipe Bravo-Marquez, Andrés Carvallo, Vladimir Araujo

In this paper we present ALBETO and DistilBETO, which are versions of ALBERT and DistilBERT pre-trained exclusively on Spanish corpora.

Natural Language Understanding · NER · +1
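
For readers who want to try these lightweight Spanish models, the sketch below loads a checkpoint with the Hugging Face transformers library. The model identifier is an assumption about where the released weights live (the dccuchile organization also hosts BETO); check the authors' repository for the exact names.

```python
# Minimal sketch: loading a lightweight Spanish checkpoint with the
# Hugging Face transformers library. The model name below is an
# assumed hub identifier; consult the authors' release for the
# official ones.
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "dccuchile/albert-base-spanish"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

inputs = tokenizer("El gato duerme en el sofá.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```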

Entropy-based Stability-Plasticity for Lifelong Learning

1 code implementation • 18 Apr 2022 • Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens

The ability to continuously learn remains elusive for deep learning models.

Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations

no code implementations • EMNLP 2021 • Vladimir Araujo, Andrés Villa, Marcelo Mendoza, Marie-Francine Moens, Alvaro Soto

Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level.

Relationship Detection · Sentence

Stress Test Evaluation of Biomedical Word Embeddings

1 code implementation • NAACL (BioNLP) 2021 • Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Camilo Thorne, Denis Parra

The success of pretrained word embeddings has motivated their use in the biomedical domain, with contextualized embeddings yielding remarkable results in several biomedical NLP tasks.

named-entity-recognition · Named Entity Recognition · +2

Interpretable Contextual Team-aware Item Recommendation: Application in Multiplayer Online Battle Arena Games

1 code implementation • 30 Jul 2020 • Andrés Villa, Vladimir Araujo, Francisca Cattan, Denis Parra

Our evaluation indicates that both the Transformer architecture and the contextual information are essential to get the best results for this item recommendation task.

Recommendation Systems

Translating Natural Language Instructions for Behavioral Robot Navigation with a Multi-Head Attention Mechanism

no code implementations • WS 2020 • Patricio Cerda-Mardini, Vladimir Araujo, Alvaro Soto

We propose a multi-head attention mechanism as a blending layer in a neural network model that translates natural language to a high level behavioral language for indoor robot navigation.

Robot Navigation
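
The abstract describes multi-head attention used as a blending layer in an instruction-translation model for robot navigation. The PyTorch sketch below shows one plausible way to blend two feature streams with nn.MultiheadAttention; the dimensions and the query/key/value assignments are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a multi-head attention "blending" layer in PyTorch.
# Dimensions and the query/key/value roles are assumptions.
import torch
import torch.nn as nn

class AttentionBlender(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, instruction_feats, context_feats):
        # Instruction tokens attend over the other feature stream,
        # producing a blended representation of the same length.
        blended, _ = self.attn(
            query=instruction_feats, key=context_feats, value=context_feats
        )
        return blended

blender = AttentionBlender()
instr = torch.randn(2, 20, 256)    # (batch, instruction tokens, dim)
context = torch.randn(2, 35, 256)  # (batch, context tokens, dim)
print(blender(instr, context).shape)  # torch.Size([2, 20, 256])
```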

On Adversarial Examples for Biomedical NLP Tasks

no code implementations • 23 Apr 2020 • Vladimir Araujo, Andres Carvallo, Carlos Aspillaga, Denis Parra

We also show that we can significantly improve the robustness of the models by training them with adversarial examples.

Language Modelling · named-entity-recognition · +5
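
The excerpt above reports robustness gains from training on adversarial examples. The sketch below shows one common way to build such examples for text, mixing simple character-level perturbations into the training set; the specific perturbation and mixing ratio are assumptions, not necessarily those used in the paper.

```python
import random

def perturb(text, swap_prob=0.1, seed=None):
    """Character-swap perturbation: a simple surface-level attack.

    Sketch only: randomly swaps adjacent characters inside longer
    words to simulate typo-style noise. The paper's adversarial
    generation strategy may differ.
    """
    rng = random.Random(seed)
    words = []
    for word in text.split():
        chars = list(word)
        if len(chars) > 3 and rng.random() < swap_prob:
            i = rng.randrange(1, len(chars) - 2)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        words.append("".join(chars))
    return " ".join(words)

# Augment the training set with perturbed copies (assumed 1:1 mixing).
train_texts = ["The patient was prescribed metformin for type 2 diabetes."]
augmented = train_texts + [perturb(t, swap_prob=0.5, seed=0) for t in train_texts]
print(augmented)
```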
