no code implementations • 23 Oct 2023 • Sabri Boughorbel, Majd Hawasly
While significant progress has been made in benchmarking Large Language Models (LLMs) across various tasks, there is a lack of comprehensive evaluation of their abilities in responding to multi-turn instructions in less-commonly tested languages like Arabic.
1 code implementation • 9 Aug 2023 • Fahim Dalvi, Maram Hasanain, Sabri Boughorbel, Basel Mousi, Samir Abdaljalil, Nizi Nazar, Ahmed Abdelali, Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Ali, Majd Hawasly, Nadir Durrani, Firoj Alam
In this study, we introduce the LLMeBench framework, which can be seamlessly customized to evaluate LLMs for any NLP task, regardless of language.
no code implementations • 24 May 2023 • Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram Hasanain, Basel Mousi, Sabri Boughorbel, Yassine El Kheir, Daniel Izham, Fahim Dalvi, Majd Hawasly, Nizi Nazar, Yousseif Elshahawy, Ahmed Ali, Nadir Durrani, Natasa Milic-Frayling, Firoj Alam
Our findings provide valuable insights into the applicability of LLMs for Arabic NLP and speech processing tasks.
no code implementations • 3 Apr 2023 • Sabri Boughorbel, Fethi Jarray, Abdulaziz Al Homaid, Rashid Niaz, Khalid Alyafei
In the experimental analysis, we show that multi-modality improves prediction performance compared with models trained solely on text or vital signs.
no code implementations • 14 Nov 2021 • Sharmita Dey, Sabri Boughorbel, Arndt F. Schilling
Control strategies for active prostheses or orthoses use sensor inputs to recognize the user's locomotive intention and generate corresponding control commands for producing the desired locomotion.
no code implementations • 6 Mar 2021 • Sabri Boughorbel, Fethi Jarray, Abdou Kadri
In this work, we are interested in developing deep learning models for no-show prediction based on tabular data while ensuring fairness properties.
no code implementations • 27 Oct 2019 • Sabri Boughorbel, Fethi Jarray, Neethu Venugopal, Shabir Moosa, Haithum Elhadi, Michel Makhlouf
We propose a new FL model called Federated Uncertainty-Aware Learning Algorithm (FUALA) that improves on Federated Averaging (FedAvg) in the context of EHR.
no code implementations • 24 Nov 2018 • Sabri Boughorbel, Fethi Jarray, Neethu Venugopal, Haithum Elhadi
The network is trained in alternating epochs: one epoch on the clean dataset with a standard cross-entropy loss, and the next on the noisy dataset with a loss corrected by the estimated corruption matrix.
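The corrected-loss idea in this entry can be sketched as follows, assuming a forward-style correction in which predicted class probabilities are multiplied by the estimated corruption matrix before cross-entropy is computed; the function names and NumPy setup are illustrative, not the authors' implementation:

```python
import numpy as np

def cross_entropy(probs, labels):
    # Standard cross-entropy, used on epochs over the clean dataset.
    # probs: (batch, classes) predicted probabilities; labels: (batch,) ints.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def corrected_cross_entropy(probs, labels, C):
    # Forward-corrected loss, used on epochs over the noisy dataset.
    # C[i, j] is the estimated probability that true class i was
    # observed as (noisy) class j; multiplying by C maps clean
    # predictions into the noisy-label space before the loss.
    noisy_probs = probs @ C
    return -np.mean(np.log(noisy_probs[np.arange(len(labels)), labels]))
```

With an identity corruption matrix (no label noise), the corrected loss reduces to the standard cross-entropy, which is a quick sanity check for such a correction.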