Search Results for author: Badr AlKhamissi

Found 16 papers, 5 papers with code

"Flex Tape Can't Fix That": Bias and Misinformation in Edited Language Models

no code implementations · 29 Feb 2024 · Karina Halevy, Anna Sotnikova, Badr AlKhamissi, Syrielle Montariol, Antoine Bosselut

We introduce a novel benchmark dataset, Seesaw-CF, for measuring bias-related harms of model editing and conduct the first in-depth investigation of how different weight-editing methods impact model bias.

Misinformation · Model Editing

Investigating Cultural Alignment of Large Language Models

1 code implementation · 20 Feb 2024 · Badr AlKhamissi, Muhammad ElNokrashy, Mai AlKhamissi, Mona Diab

The intricate relationship between language and culture has long been a subject of exploration within the realm of linguistic anthropology.

Cross-Lingual Transfer

Partial Diacritization: A Context-Contrastive Inference Approach

no code implementations · 17 Jan 2024 · Muhammad ElNokrashy, Badr AlKhamissi

In this light, we introduce Context-Contrastive Partial Diacritization (CCPD), a novel approach to PD that integrates seamlessly with existing Arabic diacritization systems.

Instruction-tuning Aligns LLMs to the Human Brain

no code implementations · 1 Dec 2023 · Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut

To identify the factors underlying LLM-brain alignment, we compute correlations between the brain alignment of LLMs and model properties such as model size, problem-solving ability, and performance on tasks requiring world knowledge spanning various domains.

Natural Language Queries · World Knowledge

Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models

1 code implementation · 28 Jun 2023 · Zaid Alyafeai, Maged S. Alshaibani, Badr AlKhamissi, Hamzah Luqman, Ebrahim Alareqi, Ali Fadel

Large language models (LLMs) have demonstrated impressive performance on various downstream tasks without requiring fine-tuning; one example is ChatGPT, a chat-based model built on top of LLMs such as GPT-3.5 and GPT-4.

Part-Of-Speech Tagging · Sentiment Analysis +1

OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models

no code implementations · 19 May 2023 · Badr AlKhamissi, Siddharth Verma, Ping Yu, Zhijing Jin, Asli Celikyilmaz, Mona Diab

Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations.

Depth-Wise Attention (DWAtt): A Layer Fusion Method for Data-Efficient Classification

no code implementations · 30 Sep 2022 · Muhammad ElNokrashy, Badr AlKhamissi, Mona Diab

To test this, we propose a new layer fusion method: Depth-Wise Attention (DWAtt), to help re-surface signals from non-final layers.
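
The snippet describes DWAtt only at a high level. As a rough illustrative sketch (not the paper's implementation), attending over the depth axis of a network's per-layer hidden states might look like the following, where the query/key projections `w_q`, `w_k` and the use of the final layer's state as the query are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def depth_wise_attention(layer_states, w_q, w_k):
    """Fuse one token's hidden states from every layer via attention over depth.

    layer_states: array of shape (num_layers, hidden_dim).
    w_q, w_k: (hidden_dim, hidden_dim) projections; this parameterization
    is a guess for illustration, not the paper's exact formulation.
    """
    q = layer_states[-1] @ w_q            # query derived from the final layer
    k = layer_states @ w_k                # one key per layer
    scores = k @ q / np.sqrt(q.shape[0])  # (num_layers,) scaled dot products
    weights = softmax(scores)             # attention distribution over depth
    return weights @ layer_states         # weighted mix re-surfacing earlier layers

rng = np.random.default_rng(0)
num_layers, hidden = 12, 16
states = rng.normal(size=(num_layers, hidden))
fused = depth_wise_attention(states, np.eye(hidden), np.eye(hidden))
```

Because the weights form a convex combination, the fused vector stays within the per-dimension range of the layer states while letting non-final layers contribute directly to the classifier input.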

NER

ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection

no code implementations · 25 May 2022 · Badr AlKhamissi, Faisal Ladhak, Srini Iyer, Ves Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Asli Celikyilmaz, Mona Diab

Hate speech detection is complex; it relies on commonsense reasoning, knowledge of stereotypes, and an understanding of social nuance that differs from one culture to the next.

Cultural Vocal Bursts Intensity Prediction · Few-Shot Learning +1

Meta AI at Arabic Hate Speech 2022: MultiTask Learning with Self-Correction for Hate Speech Classification

no code implementations · OSACT (LREC) 2022 · Badr AlKhamissi, Mona Diab

The tasks are to predict whether a tweet contains (1) offensive language; whether it constitutes (2) hate speech; and, if so, (3) its fine-grained hate speech label from one of six categories.

Hate Speech Detection

A Review on Language Models as Knowledge Bases

no code implementations · 12 Apr 2022 · Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, Marjan Ghazvininejad

Recently, there has been a surge of interest in the NLP community on the use of pretrained Language Models (LMs) as Knowledge Bases (KBs).

How to Learn and Represent Abstractions: An Investigation using Symbolic Alchemy

1 code implementation · 14 Dec 2021 · Badr AlKhamissi, Akshay Srinivasan, Zeb Kurth-Nelson, Sam Ritter

Alchemy is a new meta-learning environment rich enough to contain interesting abstractions, yet simple enough to make fine-grained analysis tractable.

Meta-Learning

Deep Spiking Neural Networks with Resonate-and-Fire Neurons

no code implementations · 16 Sep 2021 · Badr AlKhamissi, Muhammad ElNokrashy, David Bernal-Casas

In this work, we explore a new Spiking Neural Network (SNN) formulation with Resonate-and-Fire (RAF) neurons (Izhikevich, 2001) trained with gradient descent via back-propagation.
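
The resonate-and-fire (RAF) neuron of Izhikevich (2001) has a two-dimensional state, conveniently written as a complex number z obeying dz/dt = (b + iω)z + I(t): b < 0 damps the oscillation and ω sets its frequency, so the neuron fires preferentially to inputs arriving in phase with its resonance. A minimal single-neuron simulation sketch, with illustrative parameter values not taken from the paper (the paper trains networks of such neurons with back-propagation, which this omits):

```python
import cmath

def simulate_raf(inputs, b=-0.1, omega=10.0, dt=0.01, threshold=1.0):
    """Simulate one resonate-and-fire neuron (Izhikevich, 2001).

    State z is complex with dynamics dz/dt = (b + i*omega) * z + I(t).
    A spike is emitted when Im(z) reaches `threshold`, then z is reset.
    Parameters here are illustrative, not the paper's.
    """
    decay = cmath.exp((b + 1j * omega) * dt)  # exact one-step propagator
    z = 0.0 + 0.0j
    spikes = []
    for t, current in enumerate(inputs):
        z = z * decay + dt * current
        if z.imag >= threshold:
            spikes.append(t)
            z = 0.0 + 0.0j  # simple reset after a spike
    return spikes

# A brief input pulse makes the neuron "ring": the damped oscillation
# carries Im(z) past threshold a few steps after the pulse ends.
stim = [30.0 if 10 <= t < 15 else 0.0 for t in range(500)]
spike_times = simulate_raf(stim)
```

The spike arrives shortly after the stimulus window because the subthreshold state keeps rotating in the complex plane, which is the resonance property that distinguishes RAF neurons from integrate-and-fire models.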

The Emergence of Abstract and Episodic Neurons in Episodic Meta-RL

no code implementations · ICLR Workshop Learning_to_Learn 2021 · Badr AlKhamissi, Muhammad ElNokrashy, Michael Spranger

In this work, we analyze the reinstatement mechanism introduced by Ritter et al. (2018) to reveal two classes of neurons that emerge in the agent's working memory (an epLSTM cell) when trained using episodic meta-RL on an episodic variant of the Harlow visual fixation task.

Adapting MARBERT for Improved Arabic Dialect Identification: Submission to the NADI 2021 Shared Task

1 code implementation · EACL (WANLP) 2021 · Badr AlKhamissi, Mohamed Gabr, Muhammad ElNokrashy, Khaled Essam

The tasks are to identify the geographic origin of short Dialectal Arabic (DA) and Modern Standard Arabic (MSA) utterances at both the country and province levels.

Dialect Identification
