Search Results for author: Ahmed M. Abdelmoniem

Found 12 papers, 5 papers with code

Decentralised Moderation for Interoperable Social Networks: A Conversation-based Approach for Pleroma and the Fediverse

1 code implementation • 3 Apr 2024 • Vibhor Agarwal, Aravindh Raman, Nishanth Sastry, Ahmed M. Abdelmoniem, Gareth Tyson, Ignacio Castro

Recent work has exploited the conversational context of a post to improve this automatic tagging, e.g., using the replies to a post to help classify whether it contains toxic speech.
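The sketch below is not the paper's method, only a loose, hypothetical illustration of the idea in the excerpt (all names and toy data are invented): fold a post's replies into its text before running an ordinary toxicity classifier.

```python
# Hypothetical sketch, not the paper's model: concatenate a post with its replies
# so a plain bag-of-words classifier sees the conversational context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy conversations: (post, replies, toxic label) -- invented data.
conversations = [
    ("you are all idiots",         ["calm down", "reported"],  1),
    ("great write-up, thanks!",    ["agreed", "very helpful"], 0),
    ("get lost, nobody wants you", ["this is harassment"],     1),
    ("meeting moved to friday",    ["ok", "see you then"],     0),
]

# Fold the replies into the post's text so the classifier sees the conversation.
texts  = [post + " [SEP] " + " ".join(replies) for post, replies, _ in conversations]
labels = [label for _, _, label in conversations]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

new_post, new_replies = "nobody asked for your opinion", ["please be civil"]
print(clf.predict([new_post + " [SEP] " + " ".join(new_replies)]))
```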


Flashback: Understanding and Mitigating Forgetting in Federated Learning

no code implementations • 8 Feb 2024 • Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth

In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.
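In this setting, forgetting can be read as the drop in the global model's accuracy on previously learned client data from one round to the next. The snippet below is only an assumed way to quantify that per-round drop; it is not the paper's Flashback algorithm.

```python
# Assumed metric for round-level forgetting, not the paper's algorithm.
import numpy as np

def round_forgetting(acc_before: np.ndarray, acc_after: np.ndarray) -> float:
    """Average per-client accuracy drop across one aggregation round.

    acc_before[i]: accuracy of the previous global model on client i's data.
    acc_after[i]:  accuracy of the newly aggregated global model on the same data.
    Only decreases count as forgetting; improvements are clipped to zero.
    """
    drop = np.clip(acc_before - acc_after, 0.0, None)
    return float(drop.mean())

# Toy numbers: heterogeneous clients; the new round hurts clients 0 and 2.
before = np.array([0.81, 0.64, 0.90, 0.72])
after  = np.array([0.70, 0.66, 0.75, 0.73])
print(round_forgetting(before, after))   # ~0.065
```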

Federated Learning

An Empirical Study of Efficiency and Privacy of Federated Learning Algorithms

no code implementations • 24 Dec 2023 • Sofia Zahri, Hajar Bennouri, Ahmed M. Abdelmoniem

This paper showcases two illustrative scenarios that highlight the potential of federated learning (FL) as a key enabler of efficient and privacy-preserving machine learning within IoT networks.

Federated Learning • Privacy Preserving

Stock Market Price Prediction: A Hybrid LSTM and Sequential Self-Attention based Approach

no code implementations • 7 Aug 2023 • Karan Pardeshi, Sukhpal Singh Gill, Ahmed M. Abdelmoniem

In this paper, our aim is to focus on the second aspect and build a model that predicts future prices with minimal errors.
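As a rough sketch of the kind of architecture the title names (an LSTM encoder followed by a self-attention layer and a regression head), with all dimensions and hyperparameters invented rather than taken from the paper:

```python
# Generic sketch, not the paper's exact architecture: LSTM outputs are re-weighted
# by self-attention before a linear head predicts the next closing price.
import torch
import torch.nn as nn

class LSTMSelfAttention(nn.Module):
    def __init__(self, n_features=5, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, n_features)
        seq, _ = self.lstm(x)                  # (batch, window, hidden)
        ctx, _ = self.attn(seq, seq, seq)      # self-attention over time steps
        return self.head(ctx[:, -1])           # regress from the last time step

model = LSTMSelfAttention()
prices = torch.randn(8, 30, 5)                 # 8 windows of 30 days x 5 features
print(model(prices).shape)                     # torch.Size([8, 1])
```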

A Meta-learning based Stacked Regression Approach for Customer Lifetime Value Prediction

no code implementations • 7 Aug 2023 • Karan Gadgil, Sukhpal Singh Gill, Ahmed M. Abdelmoniem

Companies across the globe are keen to target potential high-value customers in an attempt to expand revenue, and this can only be achieved by understanding customers better.
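A stacked-regression setup of the kind the title describes can be sketched with scikit-learn's StackingRegressor; the base learners, feature names and data below are invented and not the paper's configuration.

```python
# Illustrative only: a stacked ("meta-learning") regressor for customer lifetime
# value on synthetic data -- not the paper's base learners or dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. recency, frequency, monetary value, tenure
y = 50 + 30 * X[:, 2] + 10 * X[:, 1] + rng.normal(scale=5, size=500)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge(),    # the meta-learner combining base predictions
)
stack.fit(X, y)
print(stack.predict(X[:3]))
```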

Meta-Learning • regression +1

Leveraging The Edge-to-Cloud Continuum for Scalable Machine Learning on Decentralized Data

no code implementations • 19 Jun 2023 • Ahmed M. Abdelmoniem

With mobile, IoT and sensor devices becoming pervasive in our lives, and with recent advances in Edge Computational Intelligence (e.g., Edge AI/ML), it has become evident that traditional methods for training AI/ML models are becoming obsolete, especially given growing concerns over privacy and security.

Towards Energy-Aware Federated Learning on Battery-Powered Clients

1 code implementation • 9 Aug 2022 • Amna Arouj, Ahmed M. Abdelmoniem

To address this issue, we develop EAFL, an energy-aware FL selection method that considers energy consumption to maximize the participation of heterogeneous target devices.
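The actual EAFL selection policy is not reproduced here; as a toy illustration of energy-aware selection, one could sample clients with probability proportional to remaining battery, so low-energy devices are drained more slowly and can keep participating in later rounds.

```python
# Toy client-selection rule, not the actual EAFL algorithm: sampling probability
# is proportional to each device's remaining battery charge.
import numpy as np

def select_clients(battery_levels, num_selected, seed=0):
    rng = np.random.default_rng(seed)
    battery = np.asarray(battery_levels, dtype=float)
    probs = battery / battery.sum()
    return rng.choice(len(battery), size=num_selected, replace=False, p=probs)

battery = [0.95, 0.10, 0.60, 0.30, 0.80]   # fraction of charge left per device
print(select_clients(battery, num_selected=3))
```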

Fairness • Federated Learning

Resource-Efficient Federated Learning

1 code implementation • 1 Nov 2021 • Ahmed M. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy

Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication.
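The aggregation step underlying most FL systems (standard FedAvg, shown here only for orientation, not as this paper's specific design) averages locally trained parameters weighted by each client's data size.

```python
# Standard FedAvg aggregation sketch -- generic FL background, not the paper's system.
import numpy as np

def fedavg(client_weights, client_sizes):
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)              # (num_clients, num_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

updates = [np.array([0.2, 1.0]), np.array([0.4, 0.0]), np.array([0.3, 0.5])]
sizes = [100, 300, 600]                             # local examples per client
print(fedavg(updates, sizes))                       # weighted global parameters
```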

Fairness • Federated Learning

Rethinking gradient sparsification as total error minimization

no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis

We show that, unlike the Top-$k$ sparsifier, the hard-threshold sparsifier has the same asymptotic convergence and linear speedup property as SGD in the convex case and is unaffected by data heterogeneity in the non-convex case.
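The two sparsifiers being contrasted are easy to state: Top-$k$ keeps the $k$ largest-magnitude gradient entries, while hard-threshold keeps every entry whose magnitude exceeds a fixed $\lambda$, so the kept count varies per step. A minimal NumPy sketch:

```python
# Minimal versions of the two sparsifiers contrasted in the excerpt.
import numpy as np

def topk_sparsify(grad, k):
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of k largest magnitudes
    out[idx] = grad[idx]
    return out

def hard_threshold_sparsify(grad, lam):
    return np.where(np.abs(grad) >= lam, grad, 0.0)

g = np.array([0.9, -0.05, 0.4, -0.7, 0.01])
print(topk_sparsify(g, k=2))             # keeps 0.9 and -0.7
print(hard_threshold_sparsify(g, 0.3))   # keeps 0.9, 0.4 and -0.7
```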

On the Impact of Device and Behavioral Heterogeneity in Federated Learning

no code implementations • 15 Feb 2021 • Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Muhammad Bilal, Marco Canini

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.

Fairness • Federated Learning

An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems

1 code implementation • 26 Jan 2021 • Ahmed M. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini

Many proposals exploit the compressibility of the gradients and propose lossy compression techniques to speed up the communication stage of distributed training.
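The statistical flavour of such compressors can be sketched as follows: instead of sorting to find a Top-$k$ threshold, assume a simple distribution for gradient magnitudes, estimate its scale, and derive the threshold analytically. This is a rough illustration of the general idea, not the paper's exact estimator.

```python
# Rough sketch of threshold estimation from an assumed Laplace-like gradient
# distribution -- illustrative, not the paper's estimator.
import numpy as np

def statistical_threshold(grad, keep_ratio):
    scale = np.mean(np.abs(grad))            # MLE scale of a Laplace(0, b) fit
    return -scale * np.log(keep_ratio)       # P(|g| > t) = exp(-t/b) = keep_ratio

rng = np.random.default_rng(0)
g = rng.laplace(scale=0.01, size=100_000)    # synthetic "gradient"
t = statistical_threshold(g, keep_ratio=0.01)
kept = np.abs(g) > t
print(t, kept.mean())                        # kept fraction should be near 0.01
```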

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.
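For the quantization side, a minimal QSGD-style stochastic quantizer (illustrative only, not one of the specific schemes studied in the paper) looks like this:

```python
# QSGD-style stochastic quantization to a few levels per gradient norm.
# The stochastic rounding keeps the quantized gradient an unbiased estimate.
import numpy as np

def stochastic_quantize(grad, levels=4, seed=0):
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(grad)
    if norm == 0:
        return grad
    scaled = np.abs(grad) / norm * levels          # map magnitudes to [0, levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                       # round up with this probability
    q = lower + (rng.random(grad.shape) < prob_up)
    return np.sign(grad) * q * norm / levels       # unbiased estimate of grad

g = np.array([0.5, -0.2, 0.05, 0.0, -0.9])
print(stochastic_quantize(g))
```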

Model Compression • Quantization
