Search Results for author: Jesujoba O. Alabi

Found 12 papers, 7 papers with code

Few-Shot Pidgin Text Adaptation via Contrastive Fine-Tuning

no code implementations · COLING 2022 · Ernie Chang, Jesujoba O. Alabi, David Ifeoluwa Adelani, Vera Demberg

The surging demand for multilingual dialogue systems often requires a costly labeling process whenever a new language is added.

Text Generation

The Hidden Space of Transformer Language Adapters

no code implementations · 20 Feb 2024 · Jesujoba O. Alabi, Marius Mosbach, Matan Eyal, Dietrich Klakow, Mor Geva

We analyze the operation of transformer language adapters, which are small modules trained on top of a frozen language model to adapt its predictions to new target languages.

Language Modelling
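
As a rough illustration of the adapters analyzed above: a language adapter is a small residual module trained on top of a frozen transformer. The sketch below is a generic minimal bottleneck adapter, not the paper's exact configuration; the layer names and bottleneck size are illustrative.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection so it learns only a small correction."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The frozen model's representation passes through unchanged except
        # for the small adjustment added on the residual path.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During language adaptation only the adapter parameters are trained:
# for p in base_model.parameters():      # base_model: any pretrained LM
#     p.requires_grad = False
```

Only the adapter's two small projection matrices are updated, which is what makes this kind of per-language adaptation cheap.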

The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis

no code implementations · 20 Feb 2024 · Miaoran Zhang, Vagrant Gautam, Mingyang Wang, Jesujoba O. Alabi, Xiaoyu Shen, Dietrich Klakow, Marius Mosbach

Compared to work on monolingual (English) in-context learning, multilingual in-context learning is under-explored, and we lack an in-depth understanding of the role of demonstrations in this context.

In-Context Learning
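
The "demonstrations" in question are the labeled examples prepended to a query in an in-context learning prompt. A minimal, hypothetical prompt builder (the template and the Yorùbá sentiment examples are invented for illustration) might look like:

```python
def build_prompt(demonstrations, query, template="{text} => {label}"):
    """Concatenate labeled demonstrations, then the unlabeled query,
    forming a standard few-shot in-context learning prompt."""
    lines = [template.format(text=t, label=l) for t, l in demonstrations]
    lines.append(template.format(text=query, label="").rstrip())
    return "\n".join(lines)

# Demonstrations may be in the query's language or in a high-resource
# language such as English; analyses like the one above vary exactly
# such choices (number, language, and quality of demonstrations).
demos = [
    ("Mo nífẹ̀ẹ́ fíìmù yìí", "positive"),
    ("Fíìmù yìí burú", "negative"),
]
print(build_prompt(demos, "Fíìmù yìí dára gan-an"))
```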

SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects

2 code implementations · 14 Sep 2023 · David Ifeoluwa Adelani, Hannah Liu, Xiaoyu Shen, Nikita Vassilyev, Jesujoba O. Alabi, Yanke Mao, Haonan Gao, Annie En-Shiun Lee

Despite the progress recorded in multilingual natural language processing over the last few years, evaluation is typically limited to the small set of languages with available datasets, which excludes a large number of low-resource languages.

Cross-Lingual Transfer · Language Modelling +5
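
Since SIB-200 is a topic-classification dataset, a typical way to use it is via the Hugging Face datasets library. The Hub ID and config name below are assumptions about the public release; check the paper's repository for the exact identifiers.

```python
from datasets import load_dataset

# Assumed Hub ID and FLORES-style language config (here: Yorùbá in Latin
# script); verify both against the official SIB-200 release.
sib = load_dataset("Davlan/sib200", "yor_Latn")
print(sib["train"][0])  # fields typically include the text and its topic label
```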

YORC: Yoruba Reading Comprehension dataset

no code implementations · 18 Aug 2023 · Anuoluwapo Aremu, Jesujoba O. Alabi, David Ifeoluwa Adelani

In this paper, we create YORC, a new multiple-choice Yoruba reading comprehension dataset based on Yoruba high-school reading comprehension examinations.

Cross-Lingual Transfer · Reading Comprehension

Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning

1 code implementation · COLING 2022 · Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, Dietrich Klakow

Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages.

NER · Sentiment Analysis +5
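
Multilingual adaptive fine-tuning (MAFT) amounts to continuing masked-language-model pretraining of an existing multilingual PLM on monolingual text in the target languages before any task-specific fine-tuning. Below is a minimal sketch of that recipe with Hugging Face Transformers; the base model, data file, and hyperparameters are placeholders, not the paper's exact setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "xlm-roberta-base"  # placeholder multilingual PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Monolingual text in the target African languages, one sentence per
# line (the file name is illustrative).
corpus = load_dataset("text", data_files={"train": "african_langs.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard MLM objective: randomly mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="maft-model", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

The adapted checkpoint can then be fine-tuned on downstream tasks such as NER or sentiment analysis in the usual way.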

Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yorùbá and Twi

1 code implementation · 5 Dec 2019 · Jesujoba O. Alabi, Kwabena Amponsah-Kaakyire, David I. Adelani, Cristina España-Bonet

In this paper we focus on two African languages, Yorùbá and Twi, and compare word embeddings obtained from massive, automatically crawled corpora with word embeddings obtained from curated corpora and language-dependent processing.

Word Embeddings
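
Both sides of the comparison come down to training subword-aware embeddings on different corpora. A hedged sketch using gensim's fastText implementation follows; the file name, hyperparameters, and probe word are illustrative, not the paper's exact setup.

```python
from gensim.models import FastText

# Train fastText embeddings on a curated corpus; the same recipe applied
# to a massive crawled corpus yields the other side of the comparison.
sentences = [
    line.split()
    for line in open("yoruba_curated.txt", encoding="utf-8")
]
model = FastText(sentences, vector_size=300, window=5, min_count=3, epochs=10)

# Nearest neighbours give a quick qualitative check of embedding quality.
print(model.wv.most_similar("ilé"))  # 'ilé': "house" in Yorùbá
```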
