Network-Aware Optimization of Distributed Learning for Fog Computing

17 Apr 2020  ·  Su Wang, Yichen Ruan, Yuwei Tu, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong

Fog computing promises to enable machine learning tasks to scale to large amounts of data by distributing processing across connected devices. Two key challenges to achieving this goal are heterogeneity in devices' compute resources and topology constraints on which devices can communicate with each other. We address these challenges by developing the first network-aware distributed learning optimization methodology in which devices optimally share local data processing and send their learned parameters to a server for aggregation at certain time intervals. Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off the costs associated with devices processing, offloading, and discarding data points. We analytically characterize the optimal data transfer solution for different fog network topologies, showing for example that the value of offloading is approximately linear in the range of computing costs in the network. Our subsequent experiments on datasets collected from our testbed confirm that our algorithms substantially improve network resource utilization without sacrificing the accuracy of the learned model. In these experiments, we also study the effect of network dynamics, quantifying the impact of nodes entering or exiting the network on model learning and resource costs.
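The abstract describes a convex data transfer problem in which each device decides how much of its data to process locally, offload to neighbors, or discard, subject to topology and capacity constraints. The sketch below is a minimal illustration of that kind of formulation using CVXPY; the cost model, capacity constraints, and all variable names (`D`, `cap`, `c_proc`, `c_comm`, `c_disc`, `adj`) are assumptions made for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of a convex data-offloading trade-off, not the paper's code.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 5                                        # number of fog devices (assumed)
D = rng.uniform(50, 150, size=N)             # data points arriving at each device
cap = rng.uniform(40, 120, size=N)           # per-device processing capacity
c_proc = rng.uniform(1.0, 4.0, size=N)       # per-unit processing cost at each device
c_comm = rng.uniform(0.1, 1.0, size=(N, N))  # per-unit offloading cost between devices
np.fill_diagonal(c_comm, 0.0)                # processing locally incurs no transfer cost
c_disc = 10.0                                # penalty per discarded data point
adj = rng.random((N, N)) < 0.6               # assumed communication topology
np.fill_diagonal(adj, True)
blocked = (~adj).astype(float)               # 1 where devices cannot communicate

s = cp.Variable((N, N), nonneg=True)         # s[i, j]: data device i sends to device j
r = cp.Variable(N, nonneg=True)              # r[i]: data device i discards

cost = (cp.sum(cp.multiply(c_comm, s))                     # offloading cost
        + cp.sum(cp.multiply(c_proc, cp.sum(s, axis=0)))   # processing cost at receivers
        + c_disc * cp.sum(r))                              # discard penalty

constraints = [
    cp.sum(s, axis=1) + r == D,      # each arriving data point is processed or discarded
    cp.sum(s, axis=0) <= cap,        # processing capacity at each device
    cp.multiply(s, blocked) == 0,    # no transfers between disconnected devices
]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal total cost:", prob.value)
print("data processed per device:", np.round(s.value.sum(axis=0), 1))
```

Because the objective and constraints are linear, the problem is convex and solvable at network scale; how the optimal offloading decisions vary with the spread of computing costs is the kind of structure the paper characterizes analytically.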


Categories


Distributed, Parallel, and Cluster Computing
