Search Results for author: Mohammed Sabry

Found 4 papers, 1 paper with code

Assessing the Portability of Parameter Matrices Trained by Parameter-Efficient Finetuning Methods

no code implementations · 25 Jan 2024 · Mohammed Sabry, Anya Belz

We compare the performance of ported modules with that of equivalent modules trained (i) from scratch, and (ii) from parameters sampled from the same distribution as the ported module.

Sentiment Analysis · Transfer Learning
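The baselines in the snippet above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the matrix shape, initialisation scale, and variable names are assumptions, and baseline (ii) is approximated by matching the ported matrix's empirical mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a parameter matrix "ported" from a donor model
# (shape and scale are illustrative assumptions, not from the paper).
ported = rng.normal(loc=0.0, scale=0.02, size=(768, 64))

# Baseline (ii): fresh parameters sampled from the same distribution
# as the ported module, here matched on empirical mean and std.
sampled = rng.normal(loc=ported.mean(), scale=ported.std(), size=ported.shape)

# Baseline (i): train from scratch, starting from a standard random init.
scratch = rng.normal(loc=0.0, scale=0.02, size=ported.shape)

# The sampled baseline matches the ported matrix's first two moments
# but carries none of its learned structure.
print(round(float(sampled.mean()), 3), round(float(sampled.std()), 3))
```

The point of baseline (ii) is to isolate how much of a ported module's benefit comes from its learned structure rather than merely from its parameter statistics.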

PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques

no code implementations · 24 Apr 2023 · Mohammed Sabry, Anya Belz

Recent parameter-efficient finetuning (PEFT) techniques aim to reduce the considerable cost of fully finetuning large pretrained language models (PLMs).
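The cost argument above can be made concrete with a small sketch. This is not the paper's reference architecture, just an illustration using one common PEFT technique (a LoRA-style low-rank module); the dimensions and rank are assumed values.

```python
import numpy as np

# Illustrative assumption: a 768x768 layer, low-rank module of rank 8.
d_in, d_out, rank = 768, 768, 8

full_params = d_out * d_in                # trainable params, full finetuning
peft_params = rank * d_in + d_out * rank  # trainable params, low-rank module
print(full_params, peft_params)

# The frozen pretrained weight W is left untouched; only the small
# factors A and B are trained, and B @ A is added to W at use time.
W = np.zeros((d_out, d_in))               # stand-in for the frozen weight
A = np.random.default_rng(0).normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))               # B starts at zero: no initial change
W_eff = W + B @ A                         # effective weight during finetuning
```

Under these assumed dimensions the module trains roughly 2% of the parameters a full finetune would update, which is the kind of saving PEFT techniques target.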

AfriVEC: Word Embedding Models for African Languages. Case Study of Fon and Nobiin

1 code implementation · 8 Mar 2021 · Bonaventure F. P. Dossou, Mohammed Sabry

From Word2Vec to GloVe, word embedding models have played key roles in the current state-of-the-art results achieved in Natural Language Processing.

Transfer Learning
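The core operation embedding models such as Word2Vec and GloVe enable is comparing words by the cosine similarity of their learned vectors. A minimal sketch, with made-up toy vectors rather than anything from the AfriVEC models:

```python
import numpy as np

# Toy 3-dimensional embeddings (invented for illustration; real models
# like Word2Vec or GloVe learn vectors of 100+ dimensions from corpora).
emb = {
    "king":  np.array([0.9, 0.1, 0.4]),
    "queen": np.array([0.8, 0.2, 0.5]),
    "apple": np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    # Cosine similarity: dot product of the vectors over their norms.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words should score higher than unrelated ones.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))
```

Building such vectors for low-resource languages like Fon and Nobiin is exactly what makes downstream transfer learning possible for them.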

On the Reduction of Variance and Overestimation of Deep Q-Learning

no code implementations · 14 Oct 2019 · Mohammed Sabry, Amr M. A. Khalifa

The breakthrough of deep Q-Learning across different types of environments revolutionized the algorithmic design of Reinforcement Learning, introducing more stable and robust algorithms; to that end, many extensions to the deep Q-Learning algorithm have been proposed to reduce the variance of the target values and the overestimation phenomenon.

Q-Learning · reinforcement-learning +1
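The overestimation phenomenon the abstract refers to can be shown with a toy simulation. This is an illustration, not the paper's proposed method: when all actions have true value 0 but the Q-estimates are noisy, the standard target max_a Q(s', a) is biased upward, while a Double Q-Learning-style target, which selects the action with one estimator and evaluates it with an independent one, is not.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n_actions = 10_000, 5
std_targets, dbl_targets = [], []

for _ in range(trials):
    # Two independent noisy estimates of the same all-zero true Q-values.
    q_a = rng.normal(0.0, 1.0, size=n_actions)
    q_b = rng.normal(0.0, 1.0, size=n_actions)
    std_targets.append(q_a.max())          # standard target: max over noise, biased > 0
    dbl_targets.append(q_b[q_a.argmax()])  # select with A, evaluate with B: unbiased here

print(round(float(np.mean(std_targets)), 2), round(float(np.mean(dbl_targets)), 2))
```

Averaged over many trials, the standard target sits well above the true value of 0 while the decoupled target stays near it, which is the bias that extensions like Double DQN aim to remove.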
