Search Results for author: Mohamed Maher

Found 6 papers, 3 papers with code

Language Tokens: Simply Improving Zero-Shot Multi-Aligned Translation in Encoder-Decoder Models

no code implementations AMTA 2022 Muhammad N ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan

In a WMT-based setting, we see 1.3 and 0.4 BLEU points of improvement in the zero-shot setting and when using direct data for training, respectively, while from-English performance improves by 4.17 and 0.85 BLEU points.

Translation
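
The snippet above reports only the BLEU gains; for context, the general language-token idea (prepending language tags to the model input so one encoder-decoder model can serve many translation directions, including unseen ones) can be sketched in a few lines. The tag format and helper function below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the general language-token idea in multilingual NMT:
# prepend language tags to the encoder input so a single model can route
# between many translation directions, including unseen (zero-shot) pairs.
# The tag format and function name are illustrative, not the paper's.

def tag_example(source_text: str, src_lang: str, tgt_lang: str) -> str:
    """Prefix a sentence with hypothetical source/target language tokens."""
    return f"<src_{src_lang}> <tgt_{tgt_lang}> {source_text}"

# Direction seen in training: English -> German
print(tag_example("How are you?", "en", "de"))

# Zero-shot direction at inference time: French -> German,
# requested purely through the same tagging scheme.
print(tag_example("Comment allez-vous ?", "fr", "de"))
```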

Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

no code implementations 11 Aug 2022 Muhammad ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan Awadalla

In a WMT evaluation campaign, from-English performance improves by 4.17 and 2.87 BLEU points in the zero-shot setting and when direct data is available for training, respectively.

Translation
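
Both entries report their gains as BLEU points on WMT-style test sets. As a reminder of how such scores are computed in practice, here is a small example with sacrebleu, the scorer commonly used in WMT evaluations; the sentences are made up and unrelated to the papers' data.

```python
# Corpus-level BLEU with sacrebleu, the de-facto scorer for WMT-style
# evaluations. Hypotheses and references below are toy examples.
import sacrebleu

hypotheses = ["the cat sat on the mat", "he reads a book"]
# One reference stream, aligned sentence-by-sentence with the hypotheses.
references = [["the cat sat on the mat", "he is reading a book"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```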

AutoMLBench: A Comprehensive Experimental Evaluation of Automated Machine Learning Frameworks

no code implementations 18 Apr 2022 Hassan Eldeeb, Mohamed Maher, Radwa Elshawi, Sherif Sakr

With the booming demand for machine learning applications, it has been recognized that the number of knowledgeable data scientists cannot scale with the growing data volumes and application needs in our digital world.

AutoML BIG-bench Machine Learning +1

Automated Machine Learning: State-of-The-Art and Open Challenges

1 code implementation 5 Jun 2019 Radwa Elshawi, Mohamed Maher, Sherif Sakr

Furthermore, we provide comprehensive coverage of the various tools and frameworks that have been introduced in this domain.

AutoML BIG-bench Machine Learning
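
The tools and frameworks covered by the survey all automate, in one way or another, the combined search over candidate algorithms and their hyperparameters. The following framework-agnostic sketch illustrates that core loop with plain scikit-learn; it is not the API of any specific framework discussed in the survey.

```python
# Minimal illustration of what AutoML frameworks automate: searching over
# candidate models and their hyperparameters, then keeping the best one.
# Plain scikit-learn, not any specific framework from the survey.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(), {"n_estimators": [50, 100], "max_depth": [None, 5]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=3)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(best_model, best_score)
```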
