Search Results for author: Muhy Eddin Za'ter

Found 6 papers, 2 papers with code

Arabic Text-To-Speech (TTS) Data Preparation

no code implementations • 7 Apr 2022 • Hala Al Masri, Muhy Eddin Za'ter

The purpose of this work is to shed light on how ground-truth utterances may influence the development of speech systems in terms of naturalness, intelligibility, and understanding.

Online Gradient Descent for Flexible Power Point Tracking Under a Highly Fluctuating Weather and Load

1 code implementation • 1 Mar 2022 • Muhy Eddin Za'ter, Abdallah Adwan

The increasing demand for electricity and the need for clean and renewable energy resources to satisfy this demand in a cost-effective manner impose new challenges on researchers and developers to maximize the output of these renewable resources at all times.

Bench-Marking And Improving Arabic Automatic Image Captioning Through The Use Of Multi-Task Learning Paradigm

no code implementations • 11 Feb 2022 • Muhy Eddin Za'ter, Bashar Talafha

The results showed that the use of multi-task learning and pre-trained word embeddings noticeably enhanced the quality of image captioning; however, the presented results show that Arabic captioning still lags behind English.

Image Captioning • Multi-Task Learning • +1

SPARTA: Speaker Profiling for ARabic TAlk

no code implementations • 13 Dec 2020 • Wael Farhan, Muhy Eddin Za'ter, Qusai Abu Obaidah, Hisham al Bataineh, Zyad Sober, Hussein T. Al-Natsheh

LSTM and CNN networks were implemented on the raw features (MFCC and Mel-spectrogram), while an FCNN was explored on the pre-trained vectors; the hyper-parameters of these networks were varied to obtain the best results for each dataset and task (a rough sketch of this setup follows the entry below).

Multi-Task Learning • Speaker Profiling • +1
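
As a rough illustration of the setup described in the SPARTA abstract above, the sketch below pairs a bidirectional LSTM over frame-level MFCC/Mel features with a small fully connected network over fixed-size pre-trained utterance vectors. This is not the authors' code: the feature dimensions, layer sizes, pooling choice, and number of classes are illustrative assumptions.

```python
# Minimal sketch (assumed shapes and sizes, not the SPARTA implementation).
import torch
import torch.nn as nn

class FrameLSTM(nn.Module):
    """Bidirectional LSTM over a sequence of MFCC/Mel frames, mean-pooled."""
    def __init__(self, n_feats=40, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, n_feats)
        out, _ = self.lstm(x)
        return self.head(out.mean(dim=1))    # mean-pool over time

class VectorFCNN(nn.Module):
    """Fully connected network over a fixed-size pre-trained utterance vector."""
    def __init__(self, dim=512, hidden=256, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, v):                    # v: (batch, dim)
        return self.net(v)

# Toy forward passes with random tensors standing in for real features.
frames = torch.randn(8, 300, 40)             # 8 utterances, 300 frames, 40 MFCCs
vectors = torch.randn(8, 512)                # 8 pre-trained utterance embeddings
print(FrameLSTM()(frames).shape)             # torch.Size([8, 3])
print(VectorFCNN()(vectors).shape)           # torch.Size([8, 3])
```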

Multi-Dialect Arabic BERT for Country-Level Dialect Identification

1 code implementation • COLING (WANLP) 2020 • Bashar Talafha, Mohammad Ali, Muhy Eddin Za'ter, Haitham Seelawi, Ibraheem Tuffaha, Mostafa Samir, Wael Farhan, Hussein T. Al-Natsheh

Our winning solution itself came in the form of an ensemble of different training iterations of our pre-trained BERT model, which achieved a micro-averaged F1-score of 26.78% on the subtask at hand (see the ensembling sketch below).

Dialect Identification • Language Modelling
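
As a hedged illustration of the ensembling and scoring mentioned in the abstract above, the sketch below averages class probabilities from several training iterations (checkpoints) of a classifier and computes a micro-averaged F1 score with scikit-learn. The random arrays merely stand in for real model outputs; the number of checkpoints, examples, and dialect classes are arbitrary assumptions, not the competition configuration.

```python
# Minimal sketch of checkpoint ensembling + micro-F1 scoring (toy data only).
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_examples, n_dialects, n_checkpoints = 100, 21, 4   # assumed sizes

# One (n_examples, n_dialects) probability matrix per training iteration.
per_checkpoint_probs = [rng.dirichlet(np.ones(n_dialects), size=n_examples)
                        for _ in range(n_checkpoints)]

# Simple ensemble: average class probabilities, then take the arg-max.
ensemble_probs = np.mean(per_checkpoint_probs, axis=0)
predictions = ensemble_probs.argmax(axis=1)

labels = rng.integers(0, n_dialects, size=n_examples)  # stand-in gold labels
print("micro-F1:", f1_score(labels, predictions, average="micro"))
```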
