no code implementations • 20 Sep 2023 • Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufiño, Marie-Francine Moens
In recent years, substantial advancements in pre-trained language models have paved the way for the development of numerous non-English language versions, with a particular focus on encoder-only and decoder-only architectures.
Generative Question Answering
Natural Language Understanding
no code implementations • 12 May 2023 • Vladimir Araujo, Alvaro Soto, Marie-Francine Moens
Existing question answering methods often assume that the input content (e.g., documents or videos) is always accessible to solve the task.
no code implementations • 3 Oct 2022 • Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
Lifelong language learning seeks to have models continuously learn multiple tasks sequentially without suffering from catastrophic forgetting.
no code implementations • 4 Jul 2022 • Julio Hurtado, Alain Raymond-Saez, Vladimir Araujo, Vincenzo Lomonaco, Alvaro Soto, Davide Bacciu
Based on these insights, we propose CAWS (Consistency AWare Sampling), an original storage policy that leverages a learning consistency score (C-Score) to populate the memory with elements that are easy to learn and representative of previous tasks.
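A minimal sketch of what a consistency-aware storage policy could look like; the `c_score` function and the memory budget are illustrative assumptions, not the paper's implementation of CAWS.

```python
import heapq
from typing import Callable, Dict, List, Tuple


def populate_memory(
    task_examples: List[Dict],
    c_score: Callable[[Dict], float],  # hypothetical per-example learning consistency score
    budget: int,
) -> List[Dict]:
    """Keep the `budget` examples with the highest consistency score.

    Mirrors the general idea of filling a replay buffer with elements that
    are easy to learn and representative of the task (a sketch, not CAWS itself).
    """
    scored: List[Tuple[float, int, Dict]] = [
        (c_score(ex), i, ex) for i, ex in enumerate(task_examples)
    ]
    top = heapq.nlargest(budget, scored)  # highest-consistency examples first
    return [ex for _, _, ex in top]
```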
2 code implementations • LREC 2022 • José Cañete, Sebastián Donoso, Felipe Bravo-Marquez, Andrés Carvallo, Vladimir Araujo
In this paper, we present ALBETO and DistilBETO, versions of ALBERT and DistilBERT pre-trained exclusively on Spanish corpora.
1 code implementation • 18 Apr 2022 • Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens
The ability to continuously learn remains elusive for deep learning models.
1 code implementation • LREC 2022 • Vladimir Araujo, Andrés Carvallo, Souvik Kundu, José Cañete, Marcelo Mendoza, Robert E. Mercer, Felipe Bravo-Marquez, Marie-Francine Moens, Alvaro Soto
Due to the success of pre-trained language models, versions for languages other than English have been released in recent years.
no code implementations • nlppower (ACL) 2022 • Cristóbal Eyzaguirre, Felipe del Río, Vladimir Araujo, Álvaro Soto
Large-scale pre-trained language models have shown remarkable results in diverse NLP applications.
no code implementations • EMNLP 2021 • Vladimir Araujo, Andrés Villa, Marcelo Mendoza, Marie-Francine Moens, Alvaro Soto
Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level.
1 code implementation • NAACL (BioNLP) 2021 • Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Camilo Thorne, Denis Parra
The success of pretrained word embeddings has motivated their use in the biomedical domain, with contextualized embeddings yielding remarkable results in several biomedical NLP tasks.
1 code implementation • 21 Jun 2021 • Andrés Villa, Juan-Manuel Perez-Rua, Vladimir Araujo, Juan Carlos Niebles, Victor Escorcia, Alvaro Soto
Recently, few-shot learning has received increasing interest.
no code implementations • 1 Jan 2021 • Cristóbal Eyzaguirre, Felipe del Río, Vladimir Araujo, Alvaro Soto
DACT-BERT adds an adaptive computation mechanism to the regular processing pipeline of BERT.
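A rough sketch of what an adaptive-computation wrapper around a transformer layer stack could look like; the per-layer halting head and the 0.5 exit threshold are illustrative assumptions about the general idea, not the authors' exact DACT-BERT mechanism.

```python
import torch
import torch.nn as nn


class AdaptiveDepthEncoder(nn.Module):
    """Run transformer layers until a learned halting signal says 'stop'.

    Assumes each layer maps a hidden-state tensor to a tensor of the same shape.
    """

    def __init__(self, layers: nn.ModuleList, hidden_size: int):
        super().__init__()
        self.layers = layers
        self.halt_head = nn.Linear(hidden_size, 1)  # predicts a halting probability

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            hidden_states = layer(hidden_states)
            # Use the first (CLS-like) position to decide whether to exit early.
            p_halt = torch.sigmoid(self.halt_head(hidden_states[:, 0])).mean()
            if not self.training and p_halt > 0.5:
                break  # skip the remaining layers at inference time
        return hidden_states
```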
1 code implementation • 30 Jul 2020 • Andrés Villa, Vladimir Araujo, Francisca Cattan, Denis Parra
Our evaluation indicates that both the Transformer architecture and the contextual information are essential to get the best results for this item recommendation task.
no code implementations • WS 2020 • Vladimir Araujo, Andrés Carvallo, Denis Parra
The success of BERT's pre-trained word embeddings has motivated the model's use in tasks in the biomedical domain.
no code implementations • WS 2020 • Patricio Cerda-Mardini, Vladimir Araujo, Alvaro Soto
We propose a multi-head attention mechanism as a blending layer in a neural network model that translates natural language to a high level behavioral language for indoor robot navigation.
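A minimal sketch of using multi-head attention to blend an instruction encoding with a context encoding before decoding into the behavioral language; module names, dimensions, and the query/key split are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class AttentionBlender(nn.Module):
    """Blend an instruction encoding with an environment/context encoding
    via multi-head attention (an illustrative sketch)."""

    def __init__(self, d_model: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, instruction: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Queries come from the instruction; keys/values from the context,
        # so each instruction token attends over the context features.
        blended, _ = self.attn(query=instruction, key=context, value=context)
        return blended
```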
no code implementations • 23 Apr 2020 • Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Denis Parra
We also show that we can significantly improve the robustness of the models by training them with adversarial examples.
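A toy sketch of augmenting training data with simple character-level adversarial perturbations; the perturbation choice and function names are assumptions for illustration, not the specific attacks evaluated in the paper.

```python
import random
from typing import List, Tuple


def char_swap(text: str, rate: float = 0.05) -> str:
    """Swap adjacent characters at random positions (a typo-style perturbation)."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def augment_with_adversarial(dataset: List[Tuple[str, int]]) -> List[Tuple[str, int]]:
    """Return the original examples plus a perturbed copy of each, keeping labels."""
    return dataset + [(char_swap(text), label) for text, label in dataset]
```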
no code implementations • LREC 2020 • Carlos Aspillaga, Andrés Carvallo, Vladimir Araujo
There has been significant progress in recent years in the field of Natural Language Processing thanks to the introduction of the Transformer architecture.
Natural Language Inference
Natural Language Understanding