Search Results for author: Lu Geng

Found 3 papers, 1 paper with code

Multi-user Co-inference with Batch Processing Capable Edge Server

no code implementations • 3 Jun 2022 • Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng

To deal with the coupled offloading and scheduling introduced by concurrent batch processing, we first consider an offline problem in which the edge inference latency is constant and all tasks share the same latency constraint.

Scheduling
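The offline setting in this snippet lends itself to a short illustration. The sketch below packs tasks into batches under a constant per-batch edge inference latency and a latency constraint shared by all tasks; the function name, the greedy packing rule, and all parameters are hypothetical stand-ins, not the algorithm proposed in the paper.

```python
# Hypothetical sketch of the offline batching setting described above:
# every batch incurs a constant edge inference latency, and all tasks
# share one latency constraint. The greedy rule is illustrative only.

def offline_batching(ready_times, t_inf, deadline, max_batch):
    """Pack tasks into batches; each batch costs a constant t_inf at the edge.

    ready_times : times at which each task's data reaches the server
    t_inf       : constant edge inference latency per batch
    deadline    : common latency constraint shared by all tasks
    max_batch   : largest batch the edge server can process at once
    Returns a list of (batch_start, [ready_times]) or None if infeasible.
    """
    schedule, batch, server_free = [], [], 0.0
    for r in sorted(ready_times):
        candidate = batch + [r]
        start = max(server_free, candidate[-1])   # wait for last member
        # A batch is feasible iff its earliest member still finishes
        # within the shared deadline after the constant inference latency.
        feasible = start + t_inf <= candidate[0] + deadline
        if len(candidate) <= max_batch and feasible:
            batch = candidate
        else:
            if not batch:
                return None                       # single task already late
            start_prev = max(server_free, batch[-1])
            schedule.append((start_prev, batch))
            server_free = start_prev + t_inf
            batch = [r]
            if max(server_free, r) + t_inf > r + deadline:
                return None
    if batch:
        schedule.append((max(server_free, batch[-1]), batch))
    return schedule
```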

Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning

no code implementations • 14 Jul 2020 • Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng

Then, a greedy device scheduling algorithm is introduced: at each step it selects the device with the least updating time under the optimal bandwidth allocation, and it stops once the lower bound begins to increase, since scheduling more devices would then degrade the model accuracy.

Federated Learning • Scheduling
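The greedy loop described in the snippet can be sketched directly. Below, `update_time` stands in for the per-device update latency under the paper's optimal bandwidth allocation, and `bound` for the derived convergence lower bound; both, and the function itself, are assumptions for illustration rather than the paper's formulas.

```python
# Hypothetical sketch of the greedy device-scheduling loop described above.
# update_time() and bound() are assumed stand-ins, not the paper's results.

def greedy_schedule(devices, update_time, bound):
    """Add the fastest remaining device until the bound starts to increase.

    devices     : iterable of candidate device ids
    update_time : device -> update latency under optimal bandwidth allocation
    bound       : scheduled set -> value of the convergence lower bound
    """
    remaining = set(devices)
    scheduled = []
    best = bound(scheduled)                 # bound with no device scheduled
    while remaining:
        # Pick the device consuming the least updating time in this step.
        fastest = min(remaining, key=update_time)
        candidate = scheduled + [fastest]
        new = bound(candidate)
        if new > best:                      # scheduling more devices would
            break                           # degrade model accuracy: stop
        scheduled, best = candidate, new
        remaining.remove(fastest)
    return scheduled
```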

Improving Device-Edge Cooperative Inference of Deep Learning via 2-Step Pruning

1 code implementation • 8 Mar 2019 • Wenqi Shi, Yunzhong Hou, Sheng Zhou, Zhisheng Niu, Yang Zhang, Lu Geng

Since the output data size of a DNN layer can be larger than that of the raw data, offloading intermediate data between layers can suffer from high transmission latency under limited wireless bandwidth.
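The latency trade-off in this snippet (a layer's output may be larger than the raw input, so the split point must balance transmission time against on-device computation) can be illustrated with a small search over candidate split layers. Everything below, from the function name to the per-layer timings and sizes, is an assumed toy model, not the paper's 2-step pruning method.

```python
# Hypothetical sketch of the split-point trade-off described above: because
# a layer's output can exceed the raw input size, the best device-edge split
# must weigh transmission time against computation. All inputs are assumed.

def best_split(device_ms, edge_ms, out_bytes, raw_bytes, bandwidth_bps):
    """Pick the DNN layer after which to offload to the edge server.

    device_ms     : per-layer compute time on the device (ms), length n
    edge_ms       : per-layer compute time on the edge (ms), length n
    out_bytes     : output size of each layer (bytes), length n
    raw_bytes     : size of the raw input (bytes); split 0 = offload raw data
    bandwidth_bps : uplink bandwidth in bits per second
    """
    def tx_ms(nbytes):
        return nbytes * 8 / bandwidth_bps * 1000.0

    n = len(device_ms)
    best_layer, best_latency = None, float("inf")
    for split in range(n + 1):              # split=k: layers 1..k on device
        size = raw_bytes if split == 0 else out_bytes[split - 1]
        total = (sum(device_ms[:split])     # compute on device
                 + tx_ms(size)              # ship raw or intermediate data
                 + sum(edge_ms[split:]))    # finish on the edge server
        if total < best_latency:
            best_layer, best_latency = split, total
    return best_layer, best_latency
```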
