Search Results for author: Naram Mhaisen

Found 10 papers, 3 papers with code

Optimistic Online Non-stochastic Control via FTRL

no code implementations • 4 Apr 2024 • Naram Mhaisen, George Iosifidis

This paper brings the concept of "optimism" to the new and promising framework of online Non-stochastic Control (NSC).

Adaptive Online Non-stochastic Control

no code implementations • 2 Oct 2023 • Naram Mhaisen, George Iosifidis

We tackle the problem of Non-stochastic Control (NSC) with the aim of obtaining algorithms whose policy regret is proportional to the difficulty of the controlled environment.

Optimistic No-regret Algorithms for Discrete Caching

no code implementations • 15 Aug 2022 • Naram Mhaisen, Abhishek Sinha, Georgios Paschos, Georgios Iosifidis

We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning, where the caching policy has access to a prediction oracle (provided by, e.g., a Neural Network).
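
As a rough illustration of this setting only (not the paper's actual policy or guarantees), the toy sketch below caches the C highest-scoring files, where each file's score combines its observed request count with the oracle's hint about the next request; the catalog size, oracle accuracy, and scoring rule are all assumptions made for the example.

    # Toy sketch: optimistic discrete caching with a prediction oracle (illustrative only).
    import numpy as np

    N, C, T = 20, 5, 1000            # catalog size, cache slots, horizon (assumed values)
    rng = np.random.default_rng(0)
    counts = np.zeros(N)             # observed request counts so far
    hits = 0
    for t in range(T):
        hint = rng.integers(N)       # stand-in for the prediction oracle (e.g., a neural network)
        score = counts.copy()
        score[hint] += 1.0           # optimism: nudge the predicted file upward
        cache = set(np.argsort(score)[-C:].tolist())
        req = int(hint) if rng.random() < 0.8 else int(rng.integers(N))  # oracle right ~80% of the time here
        hits += req in cache
        counts[req] += 1
    print("cache hit rate:", hits / T)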

Online Caching with no Regret: Optimistic Learning via Recommendations

no code implementations • 20 Apr 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith

We build upon the Follow-the-Regularized-Leader (FTRL) framework, extending it here to incorporate predictions of the file requests, and we design online caching algorithms for bipartite networks with pre-reserved or dynamic storage subject to time-average budget constraints.
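
For context, a generic optimistic-FTRL step of the kind such designs build on chooses the next cache configuration x_{t+1} from the feasible set \mathcal{X} using both the observed utility gradients g_s and a prediction \tilde{g}_{t+1} of the next one; the form below is the standard template from the optimistic-learning literature, not necessarily the exact rule used in the paper:

\[
x_{t+1} = \arg\max_{x \in \mathcal{X}} \Big\{ \Big\langle \sum_{s=1}^{t} g_s + \tilde{g}_{t+1},\; x \Big\rangle - r_{1:t}(x) \Big\},
\qquad
\mathcal{R}_T = O\!\left( \sqrt{ \sum_{t=1}^{T} \left\lVert g_t - \tilde{g}_t \right\rVert^2 } \right),
\]

where r_{1:t} is the accumulated regularizer; the regret bound shrinks as the predictions become accurate and falls back to the prediction-free rate when they do not.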

Edge-computing

Online Caching with Optimistic Learning

1 code implementation • 22 Feb 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith

The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks and edge computing services, among other areas.

Edge-computing

Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in Privacy-Preserving EdgeIoT

1 code implementation • 15 Feb 2022 • Jingjing Zheng, Kai Li, Naram Mhaisen, Wei Ni, Eduardo Tovar, Mohsen Guizani

Federated learning (FL) has been increasingly considered as a way to preserve the privacy of training data from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).

Edge-computing • Federated Learning

Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data

1 code implementation • 14 Jul 2021 • Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad, Mohsen Guizani, Zaher Dawy, Wassim Nasreddine

Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model, without the need to share their local data.
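
To make the mechanism concrete, here is a minimal FedAvg-style sketch of that idea (the toy linear model, synthetic data, and hyperparameters are assumptions for illustration; the paper's hierarchical and communication-efficient variants go well beyond this):

    # Minimal sketch of generic federated averaging: local training, then weighted parameter aggregation.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        # one node's local training: least-squares gradient steps on its private data
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    # three nodes, each with a private dataset that never leaves the device
    w_true = np.array([1.0, -2.0, 0.5])
    nodes = []
    for _ in range(3):
        X = rng.normal(size=(50, 3))
        y = X @ w_true + 0.1 * rng.normal(size=50)
        nodes.append((X, y))

    w_global = np.zeros(3)
    for _ in range(20):                      # communication rounds
        local_ws, sizes = [], []
        for X, y in nodes:
            local_ws.append(local_sgd(w_global.copy(), X, y))
            sizes.append(len(y))
        # the server only sees model parameters, aggregated by local dataset size
        w_global = np.average(local_ws, axis=0, weights=sizes)

    print("learned global model:", w_global)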

Federated Learning

Pervasive AI for IoT applications: A Survey on Resource-efficient Distributed Artificial Intelligence

no code implementations • 4 May 2021 • Emna Baccour, Naram Mhaisen, Alaa Awad Abdellatif, Aiman Erbad, Amr Mohamed, Mounir Hamdi, Mohsen Guizani

The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, offering a promising alternative to centralized learning while presenting various challenges.

Recommendation Systems • Scheduling

Analysis and Optimal Edge Assignment For Hierarchical Federated Learning on Non-IID Data

no code implementations • 10 Dec 2020 • Naram Mhaisen, Alaa Awad, Amr Mohamed, Aiman Erbad, Mohsen Guizani

Distributed learning algorithms aim to leverage the distributed and diverse data stored on users' devices to learn a global phenomenon, by performing training amongst participating devices and periodically aggregating their local models' parameters into a global model.
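
The hierarchical (device → edge → cloud) aggregation this paper studies can be sketched as below; the device-to-edge assignment and the parameter values are made-up placeholders, and choosing that assignment well under non-IID data is precisely what the paper analyzes:

    # Hedged sketch of hierarchical parameter aggregation: devices -> edge servers -> cloud.
    import numpy as np

    # edge_groups[k]: list of (local_parameters, num_local_samples) for the devices assigned to edge k
    edge_groups = {
        0: [(np.array([1.0, 0.0]), 100), (np.array([0.8, 0.2]), 50)],
        1: [(np.array([0.2, 1.0]), 200), (np.array([0.3, 0.9]), 150)],
    }

    edge_models = []
    for members in edge_groups.values():
        params = np.stack([p for p, _ in members])
        sizes = np.array([n for _, n in members], dtype=float)
        # each edge server averages its devices' parameters, weighted by local data volume
        edge_models.append((np.average(params, axis=0, weights=sizes), sizes.sum()))

    # the cloud then aggregates the edge models, weighted by the data each edge covered
    global_model = np.average(np.stack([p for p, _ in edge_models]),
                              axis=0, weights=[n for _, n in edge_models])
    print("global model after one hierarchical round:", global_model)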

Edge-computing • Federated Learning
