# Edge-computing

54 papers with code • 1 benchmark • 1 dataset

Deep learning on edge devices

# Greatest papers with code

# Spectral Pruning for Recurrent Neural Networks

Pruning techniques for neural networks with a recurrent architecture, such as the recurrent neural network (RNN), are strongly desired for their application to edge-computing devices.
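
As a hedged illustration of spectral-style compression (a generic rank truncation via SVD, not the specific pruning criterion proposed in the paper), a recurrent weight matrix can be replaced by its top-k singular directions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # a recurrent weight matrix (illustrative)

def spectral_truncate(W, k):
    """Rank-k approximation of W: keep the top-k singular directions."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

W8 = spectral_truncate(W, 8)
# Storing U[:, :8] and Vt[:8] needs 2*64*8 = 1024 values vs. 4096 for W,
# a 4x parameter reduction for the edge device.
print(np.linalg.matrix_rank(W8))
```

The factored form also cuts the per-step matrix-vector cost of the recurrence from O(n²) to O(nk).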

# DRLE: Decentralized Reinforcement Learning at the Edge for Traffic Light Control in the IoV

To this end, we propose a Decentralized Reinforcement Learning at the Edge for traffic light control in the IoV (DRLE).

# Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems

In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.
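
A minimal conceptual sketch of the head/tail split (illustrative layer sizes and names, not the paper's architecture): the device-side head ends in a narrow bottleneck so that only a small tensor crosses the network, and the edge server's tail finishes inference.

```python
import numpy as np

rng = np.random.default_rng(1)

def head(x, w_in, w_bneck):
    """Device-side layers ending in a low-dimensional bottleneck."""
    h = np.maximum(x @ w_in, 0)   # ReLU layer on the mobile device
    return h @ w_bneck            # compressed representation to transmit

def tail(z, w_out):
    """Server-side layers that consume the bottleneck tensor."""
    return np.maximum(z, 0) @ w_out

x = rng.normal(size=(1, 256))         # input activation
w_in = rng.normal(size=(256, 128))
w_bneck = rng.normal(size=(128, 16))  # only 16 values sent over the air
w_out = rng.normal(size=(16, 10))

z = head(x, w_in, w_bneck)            # transmitted tensor: 16 floats
logits = tail(z, w_out)
print(z.shape, logits.shape)          # (1, 16) (1, 10)
```

The in-network compression comes from training the head so its 16-value bottleneck preserves task accuracy, rather than transmitting the raw 256-value input.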

# Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks

However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading.

# Split Computing for Complex Object Detectors: Challenges and Preliminary Results

Following the trend toward mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.

# Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems

Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, i.e., edge servers, can significantly reduce the capture-to-output delay.

# Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach

Numerical results demonstrate that efficient policies can be learned at each user, and that the proposed DDPG-based decentralized strategy outperforms both the conventional deep Q-network (DQN)-based discrete power control strategy and several greedy strategies, at reduced computation cost.

# On the Convergence of FedAvg on Non-IID Data

In this paper, we analyze the convergence of `FedAvg` on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
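
The FedAvg loop itself is simple; the sketch below is a toy setup (hypothetical data and hyperparameters, full-batch local gradients for simplicity) in which non-iid clients each fit a shared 1-D linear model y = w·x and the server averages their locally updated weights each round.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, x, y, steps=10, lr=0.01):
    """One client: `steps` gradient steps on the local squared loss."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean (w*x - y)^2
        w -= lr * grad
    return w

# Non-iid data: client k's inputs live in [k, k+1], so local gradients
# differ sharply across clients, but all share the true weight w* = 3.
clients = []
for k in range(4):
    x = rng.uniform(k, k + 1, 50)
    y = 3.0 * x + rng.normal(0, 0.1, 50)
    clients.append((x, y))

w_global = 0.0
for _ in range(50):  # communication rounds T
    local_ws = [local_update(w_global, x, y) for x, y in clients]
    w_global = np.mean(local_ws)  # server-side FedAvg step
print(f"w after 50 rounds: {w_global:.2f}")
```

Despite each client seeing a different slice of the input space, the averaged iterate approaches w* = 3, consistent with the O(1/T) rate in the convex setting.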

# DeepShift: Towards Multiplication-Less Neural Networks

This family of neural network architectures (that use convolutional shifts and fully connected shifts) is referred to as DeepShift models.
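
A hedged sketch of the underlying shift-weight idea (illustrative function names, not the paper's API): each weight is constrained to sign·2^p with integer p, so multiplying an integer activation by a weight reduces to a bit shift plus a sign flip.

```python
import numpy as np

def quantize_to_shift(w):
    """Round each real weight to the nearest signed power of two."""
    sign = np.sign(w).astype(int)
    p = np.round(np.log2(np.abs(w))).astype(int)
    return sign, p

def shift_multiply(x_int, sign, p):
    """Multiply integer inputs by sign * 2^p using shifts only."""
    shifted = np.where(p >= 0,
                       x_int << np.maximum(p, 0),   # 2^p, p >= 0
                       x_int >> np.maximum(-p, 0))  # 2^p, p < 0
    return sign * shifted

w = np.array([0.9, -2.2, 0.24])     # real-valued weights
sign, p = quantize_to_shift(w)      # -> approximations 1, -2, 0.25
x = np.array([12, 12, 12])          # integer activations

print(shift_multiply(x, sign, p))   # shift-based products: [12 -24 3]
print((x * w).round())              # float products, for comparison
```

On hardware, the shift replaces a full multiplier, which is the source of the energy and area savings these architectures target.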

# Adaptive Federated Learning in Resource Constrained Edge Computing Systems

Our focus is on a generic class of machine learning models that are trained using gradient-descent based approaches.