no code implementations • 4 Apr 2024 • Naram Mhaisen, George Iosifidis
This paper brings the concept of "optimism" to the new and promising framework of online Non-stochastic Control (NSC).
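The "optimism" in question is the standard online-learning device of exploiting a prediction of the next loss gradient. As a minimal, hedged illustration of the idea (Euclidean optimistic mirror descent over a generic convex set, not the paper's NSC controller), one round might look like:

```python
import numpy as np

def optimistic_md_step(z, g_t, m_next, eta, project):
    """One round of (Euclidean) optimistic mirror descent.

    z       -- base iterate, updated with the observed gradient g_t
    m_next  -- a prediction of the next gradient (the 'optimism')
    project -- Euclidean projection onto the feasible set
    The played point leans on m_next; accurate predictions shrink regret.
    """
    z = project(z - eta * g_t)          # base update with the true gradient
    x_next = project(z - eta * m_next)  # optimistic play using the prediction
    return z, x_next

# Example feasible set: the unit l2 ball.
project_ball = lambda v: v / max(1.0, np.linalg.norm(v))
```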
no code implementations • 2 Oct 2023 • Naram Mhaisen, George Iosifidis
We tackle the problem of Non-stochastic Control (NSC) with the aim of obtaining algorithms whose policy regret is proportional to the difficulty of the controlled environment.
no code implementations • 15 Aug 2022 • Naram Mhaisen, Abhishek Sinha, Georgios Paschos, Georgios Iosifidis
We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning, where the caching policy has access to a prediction oracle (provided by, e.g., a neural network).
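As a hypothetical sketch of how such an oracle could drive an integral (whole-file) cache, consider a "follow the predicted leader" rule that keeps the C files with the highest observed-plus-predicted request scores; the paper's regret-optimal policy is more refined, but the oracle interface is the same:

```python
import numpy as np

def optimistic_whole_file_cache(counts, pred_scores, C):
    """Cache the C files with the largest (observed + predicted) scores.

    counts      -- cumulative request counts per file, updated each slot
    pred_scores -- the prediction oracle's scores for upcoming requests
                   (e.g., a neural network's output; hypothetical interface)
    """
    scores = counts + pred_scores
    return set(np.argsort(scores)[-C:])  # indices of the cached files
```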
no code implementations • 20 Apr 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith
We build upon the Follow-the-Regularized-Leader (FTRL) framework, extending it to incorporate predictions of future file requests, and we design online caching algorithms for bipartite networks with pre-reserved or dynamic storage subject to time-average budget constraints.
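A minimal sketch of one optimistic-FTRL step for a single fractional cache with a quadratic regularizer and capacity C, assuming utility gradients g_t and a prediction of the next one; the bipartite-network and budget-constraint machinery of the paper is omitted:

```python
import numpy as np

def project_capped_simplex(v, C, tol=1e-9):
    """Euclidean projection onto {y : 0 <= y <= 1, sum(y) <= C}."""
    y = np.clip(v, 0.0, 1.0)
    if y.sum() <= C:
        return y
    lo, hi = v.min() - 1.0, v.max()  # bisect over the shift tau
    while hi - lo > tol:
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, 1.0).sum() > C:
            lo = tau
        else:
            hi = tau
    return np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)

def oftrl_cache_step(G, g_t, g_hat_next, C, eta):
    """One optimistic FTRL step with a quadratic regularizer: play the
    projection of eta * (cumulative + predicted) utility gradients."""
    G = G + g_t                                          # observed so far
    y_next = project_capped_simplex(eta * (G + g_hat_next), C)
    return G, y_next
```

With the regularizer ||y||^2 / (2 * eta), maximizing the linearized utility plus regularization over the capacity set reduces to exactly this projection, which is why the step is a single projection call.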
1 code implementation • 22 Feb 2022 • Naram Mhaisen, George Iosifidis, Douglas Leith
The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks and edge computing services, among other areas.
1 code implementation • 15 Feb 2022 • Jingjing Zheng, Kai Li, Naram Mhaisen, Wei Ni, Eduardo Tovar, Mohsen Guizani
Federated learning (FL) has been increasingly adopted to preserve the privacy of training data against eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
no code implementations • 5 Aug 2021 • Alaa Awad Abdellatif, Naram Mhaisen, Zina Chkirbene, Amr Mohamed, Aiman Erbad, Mohsen Guizani
We then provide an in-depth literature review of the applications of RL in I-health systems.
1 code implementation • 14 Jul 2021 • Alaa Awad Abdellatif, Naram Mhaisen, Amr Mohamed, Aiman Erbad, Mohsen Guizani, Zaher Dawy, Wassim Nasreddine
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model, without the need to share their local data.
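A toy sketch of the client side of one FL round, assuming a simple least-squares model for brevity (the paper's setting involves deep models with an optimization layer on top):

```python
import numpy as np

def local_update(w_global, X, y, eta=0.1, epochs=5):
    """Run a few epochs of gradient descent on the node's private data,
    starting from the shared global model. Only the updated parameters
    are sent back; the raw (X, y) never leave the device."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= eta * grad
    return w
```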
no code implementations • 4 May 2021 • Emna Baccour, Naram Mhaisen, Alaa Awad Abdellatif, Aiman Erbad, Amr Mohamed, Mounir Hamdi, Mohsen Guizani
The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mere data collection to executing distributed computations, a promising alternative to centralized learning that nonetheless presents various challenges.
no code implementations • 10 Dec 2020 • Naram Mhaisen, Alaa Awad, Amr Mohamed, Aiman Erbad, Mohsen Guizani
Distributed learning algorithms aim to leverage the distributed and diverse data stored on users' devices to learn a global phenomenon, performing training on participating devices and periodically aggregating their local model parameters into a global model.
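The periodic aggregation step is typically a data-size-weighted average of the clients' parameters; a minimal FedAvg-style sketch, assuming each client reports its updated parameters and local sample count:

```python
import numpy as np

def fedavg_aggregate(local_weights, n_samples):
    """Weighted average of client parameters: clients holding more data
    pull the global model proportionally harder (FedAvg-style rule)."""
    total = float(sum(n_samples))
    return sum(w * (n / total) for w, n in zip(local_weights, n_samples))
```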