no code implementations • 25 Jan 2024 • Mohammed Sabry, Anya Belz
We compare the performance of ported modules with that of equivalent modules trained (i) from scratch, and (ii) from parameters sampled from the same distribution as the ported module.
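The abstract does not name a concrete module or sampling scheme, so the following is a minimal PyTorch sketch of the three conditions being compared, assuming the ported module is a single linear layer and that "same distribution" means matching the empirical mean and standard deviation of the ported weights:

```python
import torch
import torch.nn as nn

# Hypothetical ported module (e.g. an adapter lifted from a source PLM).
ported = nn.Linear(64, 64)

# (i) Equivalent module trained from scratch: default random initialization.
scratch = nn.Linear(64, 64)

# (ii) Parameters sampled from the same distribution as the ported module:
# here, a normal distribution matching the ported weights' mean and std.
sampled = nn.Linear(64, 64)
with torch.no_grad():
    sampled.weight.normal_(ported.weight.mean().item(), ported.weight.std().item())
    sampled.bias.normal_(ported.bias.mean().item(), ported.bias.std().item())
```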
no code implementations • 24 Apr 2023 • Mohammed Sabry, Anya Belz
Recent parameter-efficient finetuning (PEFT) techniques aim to mitigate the considerable cost of fully finetuning large pretrained language models (PLMs).
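The excerpt does not say which PEFT technique the paper evaluates; as one representative example, here is a minimal LoRA-style sketch in PyTorch, where a frozen pretrained linear layer is augmented with a trainable low-rank update (the rank and scaling values are illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update (LoRA-style PEFT)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction (B @ A) applied to x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(2, 768))  # only A and B receive gradients
```

Only the low-rank matrices A and B are trained, so the number of updated parameters is a small fraction of the full layer's.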
1 code implementation • 8 Mar 2021 • Bonaventure F. P. Dossou, Mohammed Sabry
From Word2Vec to GloVe, word embedding models have played a key role in achieving the current state-of-the-art results in Natural Language Processing.
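For readers unfamiliar with these models, here is a minimal usage sketch with gensim's Word2Vec implementation (the toy corpus is illustrative; useful embeddings require far more text):

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences.
sentences = [
    ["word", "embeddings", "capture", "semantic", "similarity"],
    ["glove", "and", "word2vec", "learn", "dense", "vectors"],
    ["dense", "vectors", "encode", "word", "meaning"],
]

# sg=1 selects the skip-gram variant; vector_size sets the embedding dimension.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

vec = model.wv["word"]                        # 50-dimensional embedding
print(model.wv.most_similar("word", topn=3))  # nearest neighbours in vector space
```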
no code implementations • 14 Oct 2019 • Mohammed Sabry, Amr M. A. Khalifa
The breakthrough of deep Q-Learning across different types of environments revolutionized the algorithmic design of Reinforcement Learning, introducing more stable and robust algorithms. To that end, many extensions to the deep Q-Learning algorithm have been proposed to reduce the variance of the target values and the overestimation phenomenon.
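One of the best-known such extensions is Double DQN (van Hasselt et al.), which reduces overestimation by letting the online network select the next action while the target network evaluates it. A minimal sketch of that target computation, assuming online_net and target_net both map a batch of states to per-action Q-values:

```python
import torch

def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double DQN target: decouple action selection from action evaluation."""
    with torch.no_grad():
        # Online network selects the greedy next action...
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        # ...and the target network evaluates that action's value.
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        # Standard bootstrapped target; terminal states contribute reward only.
        return reward + gamma * next_q * (1.0 - done)
```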