Search Results for author: Ahmed M. Abdelmoniem

Found 19 papers, 6 papers with code

AdRo-FL: Informed and Secure Client Selection for Federated Learning in the Presence of Adversarial Aggregator

no code implementations • 21 Jun 2025 • Md. Kamrul Hossain, Walid Aljoby, Anis Elgabli, Ahmed M. Abdelmoniem, Khaled A. Harras

For the distributed setting, we design a two-phase selection protocol: first, the aggregator selects the top clients based on our utility-driven ranking; then, a verifiable random function (VRF) ensures a BSA-resistant final selection.

Federated Learning Privacy Preserving
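The two-phase protocol described above lends itself to a compact sketch. Below is a hypothetical Python rendering, assuming the real VRF the paper calls for is swapped for an HMAC over a public round seed; the utility scores, function names, and parameters are all illustrative, not the paper's implementation.

```python
# Hypothetical sketch of two-phase client selection in the spirit of AdRo-FL:
# utility-driven ranking, then a verifiable-random final pick. A deployment
# would use an actual VRF (e.g., ECVRF); an HMAC over a public round seed
# stands in for the VRF output here.
import hmac, hashlib

def two_phase_select(clients, utilities, round_seed: bytes, shortlist=10, final=5):
    # Phase 1: the aggregator ranks clients by utility and shortlists the top ones.
    ranked = sorted(clients, key=lambda c: utilities[c], reverse=True)
    shortlisted = ranked[:shortlist]

    # Phase 2: a deterministic, publicly verifiable pseudo-random value per
    # client decides the final set, so a biased aggregator cannot cherry-pick.
    def vrf_stand_in(client_id: str) -> int:
        digest = hmac.new(round_seed, client_id.encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:8], "big")

    return sorted(shortlisted, key=vrf_stand_in)[:final]

clients = [f"client{i}" for i in range(20)]
utilities = {c: hash(c) % 100 for c in clients}  # placeholder utility scores
print(two_phase_select(clients, utilities, round_seed=b"round-42"))
```

Anyone holding the round seed can recompute the phase-2 ordering, which is what makes the final selection auditable.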

Benchmarking Mutual Information-based Loss Functions in Federated Learning

no code implementations • 16 Apr 2025 • Sarang S, Harsh D. Chothani, Qilei Li, Ahmed M. Abdelmoniem, Arnab K. Paul

Federated Learning (FL) has attracted considerable interest due to growing privacy concerns and regulations like the General Data Protection Regulation (GDPR), which stresses the importance of privacy-preserving and fair machine learning approaches.

Benchmarking Fairness +2

Knowledge Augmentation in Federation: Rethinking What Collaborative Learning Can Bring Back to Decentralized Data

no code implementations • 5 Mar 2025 • Wentai Wu, Ligang He, Saiqin Long, Ahmed M. Abdelmoniem, Yingliang Wu, Rui Mao

Data, as an observable form of knowledge, has become one of the most important factors of production for the development of Artificial Intelligence (AI).

Fairness Federated Learning +1

Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense

no code implementations • 5 Aug 2024 • Qilei Li, Ahmed M. Abdelmoniem

However, FL systems are vulnerable to attacks mounted by malicious clients through data poisoning and model poisoning, which can degrade the performance of the aggregated global model.

Data Poisoning Federated Learning +1
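This snippet does not spell out the paper's exact defense, so the following is only a generic sketch of confidence-weighted robust aggregation, assuming confidence is derived from each update's distance to the coordinate-wise median; the scoring rule and all names are illustrative.

```python
# Generic sketch (not the paper's algorithm): each client update gets a
# confidence score from its distance to the coordinate-wise median update,
# and outliers are down-weighted before averaging.
import numpy as np

def confidence_aware_aggregate(updates: np.ndarray, temperature: float = 1.0):
    # updates: shape (num_clients, num_params)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    conf = np.exp(-dists / (temperature * (dists.mean() + 1e-12)))
    weights = conf / conf.sum()
    return (weights[:, None] * updates).sum(axis=0)

benign = np.random.normal(0.0, 0.1, size=(9, 4))
poisoned = np.full((1, 4), 5.0)  # a crude model-poisoning update
agg = confidence_aware_aggregate(np.vstack([benign, poisoned]))
print(agg)  # close to the benign mean; the outlier is heavily down-weighted
```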

Federated Knowledge Transfer Fine-tuning Large Server Model with Resource-Constrained IoT Clients

no code implementations • 7 Jul 2024 • Shaoyuan Chen, Linlin You, Rui Liu, Shuo Yu, Ahmed M. Abdelmoniem

Compared to solutions based on centralized data centers, updating large models in the Internet of Things (IoT) faces the challenge of coordinating knowledge from distributed clients that hold private, heterogeneous data.

Federated Learning Knowledge Distillation +2

Decentralised Moderation for Interoperable Social Networks: A Conversation-based Approach for Pleroma and the Fediverse

1 code implementation • 3 Apr 2024 • Vibhor Agarwal, Aravindh Raman, Nishanth Sastry, Ahmed M. Abdelmoniem, Gareth Tyson, Ignacio Castro

Recent work has exploited the conversational context of a post to improve this automatic tagging, e.g., using the replies to a post to help classify if it contains toxic speech.

TAG
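The idea of classifying a post together with its replies can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's pipeline: classify_toxicity stands in for any off-the-shelf toxicity classifier.

```python
# Hedged sketch of conversation-aware toxicity tagging: the post is
# concatenated with its replies so the classifier sees conversational
# context instead of the post in isolation.
from typing import Callable, List

def tag_with_context(post: str, replies: List[str],
                     classify_toxicity: Callable[[str], float],
                     threshold: float = 0.5) -> bool:
    # Separator tokens keep the post and its replies distinguishable.
    context = " [SEP] ".join([post] + replies)
    return classify_toxicity(context) >= threshold

# Usage with a toy classifier (a real one would be a fine-tuned transformer):
toy = lambda text: 1.0 if "hate" in text.lower() else 0.0
print(tag_with_context("Looks fine to me", ["I hate you for this"], toy))  # True
```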

Flashback: Understanding and Mitigating Forgetting in Federated Learning

no code implementations • 8 Feb 2024 • Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth

In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.

Federated Learning
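To make "loss of knowledge across rounds" concrete, here is a minimal sketch of the forgetting measure commonly used in continual learning (not necessarily the paper's metric): the gap between the best accuracy achieved in earlier rounds and the current accuracy.

```python
# Minimal forgetting measure, per class or task: forgetting at round t is
# how far the current accuracy has dropped below the best accuracy seen in
# any earlier round.
def forgetting(acc_history):
    # acc_history: list of per-round accuracies for one class/task
    best_so_far, gaps = 0.0, []
    for acc in acc_history:
        gaps.append(max(0.0, best_so_far - acc))
        best_so_far = max(best_so_far, acc)
    return gaps

print(forgetting([0.2, 0.6, 0.3, 0.7, 0.5]))  # [0.0, 0.0, 0.3, 0.0, 0.2]
```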

An Empirical Study of Efficiency and Privacy of Federated Learning Algorithms

no code implementations • 24 Dec 2023 • Sofia Zahri, Hajar Bennouri, Ahmed M. Abdelmoniem

This paper showcases two illustrative scenarios that highlight the potential of federated learning (FL) as a key to delivering efficient and privacy-preserving machine learning within IoT networks.

Federated Learning Privacy Preserving

Stock Market Price Prediction: A Hybrid LSTM and Sequential Self-Attention based Approach

no code implementations • 7 Aug 2023 • Karan Pardeshi, Sukhpal Singh Gill, Ahmed M. Abdelmoniem

In this paper, our aim is to focus on the second aspect and build a model that predicts future prices with minimal errors.
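The hybrid architecture named in the title can be sketched in PyTorch as an LSTM encoder followed by a self-attention layer and a regression head; all layer sizes below are illustrative assumptions, not the paper's configuration.

```python
# Hedged PyTorch sketch of an LSTM + self-attention price predictor: the
# LSTM encodes the price sequence, self-attention reweights the LSTM
# outputs across time steps, and a linear head predicts the next price.
import torch
import torch.nn as nn

class LSTMSelfAttention(nn.Module):
    def __init__(self, n_features=1, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, n_features)
        h, _ = self.lstm(x)            # (batch, seq_len, hidden)
        a, _ = self.attn(h, h, h)      # self-attention over time steps
        return self.head(a[:, -1])     # predict from the last position

model = LSTMSelfAttention()
prices = torch.randn(8, 30, 1)         # 8 windows of 30 daily prices
print(model(prices).shape)             # torch.Size([8, 1])
```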

A Meta-learning based Stacked Regression Approach for Customer Lifetime Value Prediction

no code implementations • 7 Aug 2023 • Karan Gadgil, Sukhpal Singh Gill, Ahmed M. Abdelmoniem

Companies across the globe are keen on targeting potential high-value customers in an attempt to expand revenue, and this can be achieved only by understanding their customers better.

Meta-Learning regression +1
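A stacked regression approach of this kind can be sketched with scikit-learn's StackingRegressor: base regressors produce first-level predictions and a meta-learner combines them. The feature semantics and model choices below are assumptions for illustration, not the paper's setup.

```python
# Sketch of meta-learning-style stacked regression for customer lifetime
# value (CLV): base regressors are stacked and a Ridge meta-learner blends
# their predictions.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

X = np.random.rand(200, 5)  # e.g., recency, frequency, monetary features
y = X @ np.array([3.0, 1.0, 0.5, 2.0, 0.1]) + np.random.normal(0, 0.1, 200)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50)),
                ("knn", KNeighborsRegressor())],
    final_estimator=Ridge(),  # the meta-learner
)
stack.fit(X, y)
print(stack.predict(X[:3]))
```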

Leveraging The Edge-to-Cloud Continuum for Scalable Machine Learning on Decentralized Data

no code implementations • 19 Jun 2023 • Ahmed M. Abdelmoniem

With mobile, IoT, and sensor devices becoming pervasive in our lives, and with recent advances in Edge Computational Intelligence (e.g., Edge AI/ML), it has become evident that traditional methods for training AI/ML models are becoming obsolete, especially given growing concerns over privacy and security.

Towards Energy-Aware Federated Learning on Battery-Powered Clients

1 code implementation • 9 Aug 2022 • Amna Arouj, Ahmed M. Abdelmoniem

To address this issue, we develop EAFL, an energy-aware FL selection method that considers energy consumption to maximize the participation of heterogeneous target devices.

Fairness Federated Learning
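An energy-aware selection rule in this spirit might look like the sketch below: clients that cannot afford the round's energy cost are filtered out, and the rest are sampled with probability increasing in remaining battery. The scoring rule and names are assumptions, not EAFL's exact algorithm.

```python
# Hedged sketch of energy-aware client selection: low-battery devices are
# not repeatedly drained out of participation.
import random

def energy_aware_sample(clients, battery_level, per_round_cost, k=5):
    # Only clients that can afford this round's energy cost are eligible.
    eligible = [c for c in clients if battery_level[c] >= per_round_cost[c]]
    # Weighted sampling without replacement (Efraimidis-Spirakis keys):
    # clients with more remaining energy get larger keys on average.
    keyed = sorted(eligible,
                   key=lambda c: random.random() ** (1.0 / battery_level[c]),
                   reverse=True)
    return keyed[:k]

clients = [f"dev{i}" for i in range(10)]
battery = {c: random.uniform(0.05, 1.0) for c in clients}  # fraction remaining
cost = {c: 0.1 for c in clients}                           # energy per round
print(energy_aware_sample(clients, battery, cost))
```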

Resource-Efficient Federated Learning

1 code implementation • 1 Nov 2021 • Ahmed M. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy

Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication.

Fairness Federated Learning
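The training scheme this abstract alludes to typically reduces to the standard FedAvg aggregation rule: the server averages locally trained models weighted by local dataset size, and raw data never leaves the clients. A minimal NumPy sketch:

```python
# Standard FedAvg aggregation: a weighted average of client models, with
# weights proportional to each client's local dataset size.
import numpy as np

def fedavg(client_weights, client_sizes):
    # client_weights: list of parameter vectors, one per client
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

w1, w2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(fedavg([w1, w2], client_sizes=[100, 300]))  # [2.5 3.5]
```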

Rethinking gradient sparsification as total error minimization

no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis

We show that, unlike the Top-$k$ sparsifier, the hard-threshold sparsifier has the same asymptotic convergence and linear speedup property as SGD in the convex case, and it has no impact on data heterogeneity in the non-convex case.
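The two sparsifiers being compared are easy to state precisely. The NumPy sketch below shows Top-$k$ (keep the $k$ largest-magnitude entries) next to hard-threshold (keep every entry with magnitude above a fixed $\lambda$, so the kept count varies per iteration):

```python
# Top-k keeps the k largest-magnitude gradient entries; hard-threshold
# keeps every entry whose magnitude exceeds a fixed lambda.
import numpy as np

def top_k(grad, k):
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of k largest |g_i|
    out[idx] = grad[idx]
    return out

def hard_threshold(grad, lam):
    return np.where(np.abs(grad) > lam, grad, 0.0)

g = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
print(top_k(g, k=2))            # keeps -0.8 and 1.2
print(hard_threshold(g, 0.25))  # keeps -0.8, 0.3 and 1.2
```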

On the Impact of Device and Behavioral Heterogeneity in Federated Learning

no code implementations • 15 Feb 2021 • Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Muhammad Bilal, Marco Canini

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.

Fairness Federated Learning

An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems

1 code implementation • 26 Jan 2021 • Ahmed M. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini

Many proposals exploit the compressibility of gradients, applying lossy compression techniques to speed up the communication stage of distributed training.
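A statistical compressor in this spirit avoids an exact Top-$k$ sort by fitting a simple distribution to the gradient magnitudes and solving for the threshold analytically. The sketch below assumes an exponential model for $|g|$; the specific fit is an illustrative assumption, not necessarily the paper's estimator.

```python
# Hedged sketch of statistical threshold estimation for gradient
# compression: model |g| as exponential, then pick the threshold so that an
# expected fraction r of entries survives, with no expensive Top-k sort.
import numpy as np

def statistical_threshold_compress(grad, target_ratio=0.01):
    mags = np.abs(grad)
    # For Exp(rate=1/mean): P(|g| > t) = exp(-t/mean)  =>  t = mean * ln(1/r)
    t = mags.mean() * np.log(1.0 / target_ratio)
    mask = mags > t
    return np.where(mask, grad, 0.0), mask.mean()  # compressed grad, actual ratio

g = np.random.laplace(0.0, 0.1, size=100_000)  # heavy-tailed gradient proxy
compressed, ratio = statistical_threshold_compress(g, target_ratio=0.01)
print(f"kept {ratio:.4f} of entries")          # close to the 1% target
```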

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.

Model Compression Quantization
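Quantization, the other compressor family this abstract mentions, can be illustrated with QSGD-style stochastic quantization: each entry is mapped to one of $s$ levels of the gradient norm, with randomized rounding keeping the compressor unbiased in expectation. A hedged NumPy sketch:

```python
# QSGD-style stochastic quantization: |g_i|/||g|| is scaled to [0, s] and
# stochastically rounded to an integer level, so E[output] == grad.
import numpy as np

def qsgd_quantize(grad, s=4, rng=np.random.default_rng()):
    norm = np.linalg.norm(grad)
    if norm == 0:
        return grad
    level = np.abs(grad) / norm * s               # position in [0, s]
    lower = np.floor(level)
    prob = level - lower                          # round up with this probability
    quantized = lower + (rng.random(grad.shape) < prob)
    return np.sign(grad) * norm * quantized / s

g = np.array([0.1, -0.5, 0.2, 0.05])
print(qsgd_quantize(g))
```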
