# Edge-computing

89 papers with code • 0 benchmarks • 0 datasets

Deep learning on edge devices

## Benchmarks

These leaderboards are used to track progress in Edge-computing.
## Libraries

Use these libraries to find Edge-computing models and implementations.

## Most implemented papers

### Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks

However, poor conditions on the wireless channel connecting mobile devices to edge servers may increase the overall capture-to-output delay achieved by edge offloading.

### Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach

Numerical results demonstrate that efficient policies can be learned at each user, and that the proposed DDPG-based decentralized strategy outperforms both the conventional deep Q-network (DQN) based discrete power control strategy and several greedy strategies, while reducing computation cost.

### On the Convergence of FedAvg on Non-IID Data

In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
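
The FedAvg scheme analyzed above can be sketched in a few lines. This is a hypothetical toy setup (1-D least-squares losses, made-up client data), not the paper's experiments: each client runs local SGD on its own non-IID data, and the server averages the resulting parameters weighted by local dataset size.

```python
# Minimal FedAvg sketch (hypothetical 1-D setup, not the paper's experiments).
def local_sgd(w, data, lr=0.1, epochs=5):
    """Local SGD on the toy objective (w - x)^2 over this client's points x."""
    for _ in range(epochs):
        for x in data:
            grad = 2.0 * (w - x)  # d/dw (w - x)^2
            w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One communication round: broadcast, local training, weighted averaging."""
    updates, sizes = [], []
    for data in client_data:
        updates.append(local_sgd(w_global, data))
        sizes.append(len(data))
    return sum(u * n for u, n in zip(updates, sizes)) / sum(sizes)

# Non-IID toy data: each client's points cluster around a different mean.
clients = [[1.0, 1.2, 0.8], [5.0, 5.1], [9.0, 8.9, 9.2, 9.1]]
w = 0.0
for t in range(20):  # T communication rounds
    w = fedavg_round(w, clients)
print(round(w, 2))  # settles near the size-weighted mean of all client data
```

Because the clients' local optima differ (the non-IID case), the global iterate settles at a compromise between them rather than at any single client's optimum.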

### Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems

Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce capture-to-output delay.

### Scientific Image Restoration Anywhere

We explore this question by evaluating the performance and accuracy of a scientific image restoration model, for which both model input and output are images, on edge computing devices.

### Graph Markov Network for Traffic Forecasting with Missing Data

Although missing values can be imputed, existing data imputation methods normally need long-term historical traffic state data.
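
The limitation noted above can be seen in the simplest classical baseline: filling a missing reading with the sensor's historical mean, which only works when a long history is available. A minimal sketch (hypothetical sensor data, not from the paper):

```python
# Hypothetical illustration: historical-mean imputation for traffic sensors.
def impute_with_history(current, history):
    """Replace None readings with each sensor's mean over past days."""
    filled = []
    for i, v in enumerate(current):
        if v is None:
            past = [day[i] for day in history if day[i] is not None]
            v = sum(past) / len(past)  # needs enough past observations to be reliable
        filled.append(v)
    return filled

# Speeds per sensor on past days (toy numbers).
history = [[52, 60, 71], [48, 58, 69], [50, 62, 70]]
print(impute_with_history([49, None, 68], history))  # -> [49, 60.0, 68]
```

With only short-term or sparse history, the mean estimate degrades, which is the gap the Graph Markov Network approach targets.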

### Split Computing for Complex Object Detectors: Challenges and Preliminary Results

Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.

### Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems

In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.
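
The split-computing idea behind this line of work can be sketched abstractly. The following is a hypothetical toy (made-up "layers" and shapes, not the paper's architecture): a head runs on the device and emits a small compressed feature payload (the in-network bottleneck), and a tail on the edge server finishes the inference.

```python
# Minimal split-computing sketch (hypothetical model, not the paper's DNN).
import json
import zlib

def head_on_device(image):
    """Early layers + bottleneck: reduce the flat input to a few coarse features."""
    chunk = max(1, len(image) // 4)  # toy 'layer': average pooling over chunks
    features = [sum(image[i:i + chunk]) / chunk for i in range(0, len(image), chunk)]
    return zlib.compress(json.dumps(features).encode())  # bytes sent over the network

def tail_on_server(payload):
    """Remaining layers on the edge server: decode features and classify."""
    features = json.loads(zlib.decompress(payload))
    return "busy" if sum(features) / len(features) > 0.5 else "quiet"

image = [0.9, 0.8, 0.7, 0.9, 0.6, 0.8, 0.9, 0.7]  # stand-in for pixel data
payload = head_on_device(image)
print(len(payload), "bytes over the wire ->", tail_on_server(payload))
```

The design point is that only the bottleneck payload crosses the challenged wireless link, so shrinking it in the early layers directly cuts the transfer component of the capture-to-output delay.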

### Systolic-CNN: An OpenCL-defined Scalable Run-time-flexible FPGA Accelerator Architecture for Accelerating Convolutional Neural Network Inference in Cloud/Edge Computing

Systolic-CNN is also run-time-flexible in the context of multi-tenancy cloud/edge computing: it can be time-shared to accelerate a variety of CNN models at run time without recompiling the FPGA kernel hardware or reprogramming the FPGA.

### DONE: Distributed Approximate Newton-type Method for Federated Edge Learning

In this work, we propose DONE, a distributed approximate Newton-type algorithm with fast convergence rate for communication-efficient federated edge learning.
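
The distributed Newton-type idea can be sketched on a toy problem. This is a simplified illustration under strong assumptions (1-D quadratic losses, exact local Hessians), not DONE itself, which approximates the Newton direction rather than computing it exactly: each edge worker reports its local gradient and curvature, and the server applies an averaged Newton update.

```python
# Toy distributed Newton step (illustrative only; DONE approximates this direction).
def local_grad_hess(x, a):
    """Gradient and Hessian of the worker's local loss (x - a)^2."""
    return 2.0 * (x - a), 2.0

def newton_round(x, targets):
    """Server aggregates local statistics and takes one Newton step."""
    grads, hessians = zip(*(local_grad_hess(x, a) for a in targets))
    g = sum(grads) / len(grads)
    h = sum(hessians) / len(hessians)
    return x - g / h  # Newton step with averaged gradient and curvature

targets = [1.0, 3.0, 8.0]    # each edge worker's local optimum (made-up values)
x = newton_round(0.0, targets)
print(x)  # quadratic objective -> exact minimum (the mean, 4.0) in one step
```

One round reaching the optimum is special to quadratics; the appeal of Newton-type methods in federated edge learning is that curvature information cuts the number of communication rounds compared with first-order methods.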