Search Results for author: Xiaoge Deng

Found 3 papers, 0 papers with code

Accelerating Federated Learning by Selecting Beneficial Herd of Local Gradients

no code implementations • 25 Mar 2024 • Ping Luo, Xiaoge Deng, Ziqing Wen, Tao Sun, Dongsheng Li

Federated Learning (FL) is a distributed machine learning framework used in communication network systems.

Federated Learning
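The title suggests the server aggregates only a selected subset of clients' local gradients each round. Below is a minimal sketch of that general idea; the cosine-similarity selection rule and the reference direction are placeholder assumptions, not the criterion proposed in the paper.

```python
import numpy as np

def select_and_aggregate(local_grads, reference, k):
    """Aggregate only the k local gradients most aligned with a reference
    direction. The similarity-based rule is a hypothetical stand-in for
    whatever 'beneficial' selection criterion the paper actually uses."""
    scores = [
        float(np.dot(g, reference) /
              (np.linalg.norm(g) * np.linalg.norm(reference) + 1e-12))
        for g in local_grads
    ]
    chosen = np.argsort(scores)[-k:]          # keep the k highest-scoring clients
    return np.mean([local_grads[i] for i in chosen], axis=0), chosen

# Toy usage: 8 clients, 5-dimensional gradients.
rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(8)]
reference = np.mean(grads, axis=0)            # e.g. last round's average gradient
agg, chosen = select_and_aggregate(grads, reference, k=4)
print("selected clients:", chosen)
print("aggregated gradient:", agg)
```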

Towards Understanding the Generalizability of Delayed Stochastic Gradient Descent

no code implementations • 18 Aug 2023 • Xiaoge Deng, Li Shen, Shengwei Li, Tao Sun, Dongsheng Li, DaCheng Tao

Stochastic gradient descent (SGD) performed in an asynchronous manner plays a crucial role in training large-scale machine learning models.
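In asynchronous training, each update applies a gradient computed on a stale copy of the parameters, i.e. roughly w_{t+1} = w_t − η∇f(w_{t−τ}) for some delay τ. The following is a minimal single-process simulation of that fixed-delay model on a toy quadratic objective; the objective and delay scheme are illustrative assumptions, not the setup analyzed in the paper.

```python
import collections
import numpy as np

def delayed_sgd(grad_fn, w0, lr=0.05, delay=3, steps=200):
    """Simulate SGD where each update uses the gradient evaluated at the
    parameters from `delay` iterations ago (a fixed-delay model)."""
    w = np.asarray(w0, dtype=float)
    history = collections.deque([w.copy()] * (delay + 1), maxlen=delay + 1)
    for _ in range(steps):
        stale_w = history[0]              # parameters from `delay` steps back
        w = w - lr * grad_fn(stale_w)     # update with the stale gradient
        history.append(w.copy())
    return w

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w_final = delayed_sgd(grad_fn=lambda w: w, w0=np.array([5.0, -3.0]))
print(w_final)  # should end up close to the minimizer at the origin
```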

S2 Reducer: High-Performance Sparse Communication to Accelerate Distributed Deep Learning

no code implementations • 5 Oct 2021 • Keshi Ge, Yongquan Fu, Zhiquan Lai, Xiaoge Deng, Dongsheng Li

The distributed stochastic gradient descent (SGD) approach has been widely used in large-scale deep learning, and the collective communication of gradients is vital to ensuring the training scalability of distributed deep learning systems.
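Sparse gradient communication generally sends only the largest-magnitude gradient entries (with their indices) instead of the dense tensor, and then reduces them across workers. The sketch below shows the generic top-k compress-then-reduce pattern; it is not S2 Reducer's specific protocol, and the scatter-add reduction only emulates what a real sparse collective would compute.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries; return (indices, values).
    Generic top-k compressor, not the S2 Reducer scheme itself."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def sparse_sum(sparse_grads, dim):
    """Reduce sparse gradients from all workers into one dense tensor,
    emulating the result of a sparse allreduce."""
    out = np.zeros(dim)
    for idx, vals in sparse_grads:
        np.add.at(out, idx, vals)         # scatter-add handles index collisions
    return out

# Toy usage: 4 workers, 10-dimensional gradients, keep the top 3 entries each.
rng = np.random.default_rng(1)
workers = [rng.normal(size=10) for _ in range(4)]
reduced = sparse_sum([topk_sparsify(g, k=3) for g in workers], dim=10)
print(reduced)
```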

