Search Results for author: Yuris Mulya Saputra

Found 5 papers, 0 papers with code

Federated Learning Framework with Straggling Mitigation and Privacy-Awareness for AI-based Mobile Application Services

no code implementations • 17 Jun 2021 • Yuris Mulya Saputra, Diep N. Nguyen, Dinh Thai Hoang, Quoc-Viet Pham, Eryk Dutkiewicz, Won-Joo Hwang

In this work, we propose a novel framework to address straggling and privacy issues for federated learning (FL)-based mobile application services, taking into account the limited computing/communication resources at mobile users (MUs) and the mobile application provider (MAP), the privacy cost, and the rationality and incentive competition among MUs in contributing data to the MAP.

Federated Learning
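
The first three entries above all build on federated learning, in which users or vehicles train on local data and share only model updates with a central aggregator. As a point of reference only, here is a minimal FedAvg-style sketch in NumPy; it is a generic illustration, not the paper's framework (it includes no straggling mitigation, privacy accounting, or incentive mechanism), and the function names (`make_local_data`, `local_update`, `fed_avg`) are hypothetical.

```python
# Generic federated averaging (FedAvg) sketch: simulated mobile users (MUs)
# fit local linear models and the server aggregates only model weights,
# so raw user data never leaves the clients. Illustrative names and sizes.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n=200, d=5):
    """Synthetic per-user dataset: y = X @ w_true + noise."""
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    """One client's local gradient-descent pass, starting from the global model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(client_models, client_sizes):
    """Server step: average client models, weighted by local sample counts."""
    return np.average(client_models, axis=0, weights=np.asarray(client_sizes, float))

clients = [make_local_data() for _ in range(4)]   # 4 simulated MUs
w_global = np.zeros(5)

for _ in range(20):                               # communication rounds
    local_models = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local_models, [len(y) for _, y in clients])

print("global model after training:", np.round(w_global, 2))
```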

Dynamic Federated Learning-Based Economic Framework for Internet-of-Vehicles

no code implementations • 1 Jan 2021 • Yuris Mulya Saputra, Dinh Thai Hoang, Diep N. Nguyen, Le-Nam Tran, Shimin Gong, Eryk Dutkiewicz

Federated learning (FL) can empower Internet-of-Vehicles (IoV) networks by leveraging smart vehicles (SVs) to participate in the learning process with minimum data exchanges and privacy disclosure.

Federated Learning

Federated Learning Meets Contract Theory: Energy-Efficient Framework for Electric Vehicle Networks

no code implementations • 4 Apr 2020 • Yuris Mulya Saputra, Diep N. Nguyen, Dinh Thai Hoang, Thang Xuan Vu, Eryk Dutkiewicz, Symeon Chatzinotas

In this paper, we propose a novel energy-efficient framework for an electric vehicle (EV) network using a contract theoretic-based economic model to maximize the profits of charging stations (CSs) and improve the social welfare of the network.

Networking and Internet Architecture • Signal Processing

Energy Demand Prediction with Federated Learning for Electric Vehicle Networks

no code implementations • 3 Sep 2019 • Yuris Mulya Saputra, Dinh Thai Hoang, Diep N. Nguyen, Eryk Dutkiewicz, Markus Dominik Mueck, Srikathyayani Srikanteswara

Through experimental results, we show that our proposed approaches can improve the accuracy of energy demand prediction by up to 24.63% and decrease communication overhead by 83.4% compared with other baseline machine learning algorithms.

BIG-bench Machine Learning • Clustering • +1
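
The communication savings reported in the last entry come from exchanging model updates instead of raw energy records. As a rough back-of-the-envelope illustration only, with made-up sizes for stations, records, model parameters, and rounds (none taken from the paper), the arithmetic looks like this:

```python
# Hedged comparison of communication cost: uploading raw energy records to a
# central server vs. exchanging only model parameters in federated learning.
# All sizes below are illustrative assumptions, not figures from the paper.

BYTES_PER_FLOAT = 4

def centralized_upload_bytes(n_stations, records_per_station, features_per_record):
    """Every charging station ships its raw dataset to the server once."""
    return n_stations * records_per_station * features_per_record * BYTES_PER_FLOAT

def federated_upload_bytes(n_stations, model_params, rounds):
    """Every station uploads only its model update, once per communication round."""
    return n_stations * model_params * rounds * BYTES_PER_FLOAT

central = centralized_upload_bytes(n_stations=50, records_per_station=100_000,
                                   features_per_record=24)
federated = federated_upload_bytes(n_stations=50, model_params=5_000, rounds=100)

print(f"centralized upload : {central / 1e6:.1f} MB")
print(f"federated upload   : {federated / 1e6:.1f} MB")
print(f"reduction          : {100 * (1 - federated / central):.1f}%")
```

The exact reduction depends entirely on the assumed dataset size, model size, and number of rounds; the sketch only shows why the overhead shrinks when model updates are much smaller than the raw data they summarize.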
