Search Results for author: An Xu

Found 9 papers, 0 papers with code

Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation

no code implementations • CVPR 2022 • An Xu, Wenqi Li, Pengfei Guo, Dong Yang, Holger Roth, Ali Hatamizadeh, Can Zhao, Daguang Xu, Heng Huang, Ziyue Xu

In this work, we propose FedSM, a novel training framework that avoids the client drift issue and, for the first time, closes the generalization gap with centralized training on medical image segmentation tasks.

Federated Learning · Image Segmentation +3
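No code accompanies this paper, but the client drift it targets is easy to reproduce on a toy problem. The sketch below is not FedSM — it runs plain FedAvg on two synthetic quadratic clients (all names and hyperparameters are illustrative) and shows that with many local steps the averaged model settles away from the true global optimum, while a single local step per round does not:

```python
import numpy as np

# Two clients with quadratic losses f_i(w) = 0.5 * (w - c_i)' A_i (w - c_i).
A1, c1 = np.diag([1.0, 0.1]), np.array([1.0, 0.0])
A2, c2 = np.diag([0.1, 1.0]), np.array([-1.0, 1.0])

# Exact minimizer of the global objective f1 + f2.
w_star = np.linalg.solve(A1 + A2, A1 @ c1 + A2 @ c2)

def fedavg(local_steps, epochs=200, lr=0.1):
    """Plain FedAvg: clients run local gradient descent, server averages."""
    w = np.zeros(2)
    for _ in range(epochs):
        ws = []
        for A, c in ((A1, c1), (A2, c2)):
            w_i = w.copy()
            for _ in range(local_steps):
                w_i -= lr * (A @ (w_i - c))  # local GD on this client's loss
            ws.append(w_i)
        w = np.mean(ws, axis=0)              # server-side model averaging
    return w

drift_one = np.linalg.norm(fedavg(local_steps=1) - w_star)
drift_many = np.linalg.norm(fedavg(local_steps=20) - w_star)
# More local work per round -> larger drift from the global optimum.
```

With one local step FedAvg is just gradient descent on the average loss and converges to `w_star`; with 20 local steps each client overfits its own minimizer and the fixed point of the round map moves noticeably away from `w_star`.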

Coordinating Momenta for Cross-silo Federated Learning

no code implementations • 8 Feb 2021 • An Xu, Heng Huang

In this work, we propose a new method to improve training performance in cross-silo FL by maintaining double momentum buffers.

Federated Learning
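The snippet above gives no pseudocode, so the following is only a generic interpretation of "double momentum buffers" — a momentum buffer inside each silo's local SGD plus a server-side momentum over the averaged pseudo-gradient — not the paper's actual coordination scheme. All names and hyperparameters are illustrative:

```python
import numpy as np

def double_momentum_round(w, server_buf, clients, lr=0.01,
                          beta_l=0.9, beta_s=0.9, local_steps=5):
    """One cross-silo round with two momentum buffers: each client runs
    momentum SGD locally (buffer reset per round for simplicity), and the
    server applies momentum to the averaged net local update."""
    updates = []
    for A, b in clients:                      # quadratic loss per silo
        w_i, local_buf = w.copy(), np.zeros_like(w)
        for _ in range(local_steps):
            grad = A @ w_i - b                # gradient of 0.5*w'Aw - b'w
            local_buf = beta_l * local_buf + grad
            w_i -= lr * local_buf
        updates.append(w - w_i)               # pseudo-gradient of silo i
    server_buf = beta_s * server_buf + np.mean(updates, axis=0)
    return w - server_buf, server_buf

true_w = np.array([1.0, -2.0, 0.5])
A = np.diag([0.5, 1.0, 2.0])
clients = [(A, A @ true_w)] * 3               # silos sharing one optimum
w, buf = np.zeros(3), np.zeros(3)
start = np.linalg.norm(w - true_w)
for _ in range(100):
    w, buf = double_momentum_round(w, buf, clients)
```

On this toy objective the iterates spiral toward the shared optimum; the point of the sketch is only where the two buffers live, not the convergence rate.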

Delay-Tolerant Local SGD for Efficient Distributed Training

no code implementations • 1 Jan 2021 • An Xu, Xiao Yan, Hongchang Gao, Heng Huang

The heavy communication required for model synchronization is a major bottleneck in scaling distributed deep neural network training to many workers.

Federated Learning
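The delay-tolerant mechanism itself is not described in this snippet; the sketch below shows only the vanilla local SGD baseline it builds on, where each worker takes several local steps between synchronizations, cutting communication rounds by the synchronization period. The data, names, and hyperparameters are illustrative:

```python
import numpy as np

def local_sgd(workers, sync_period, total_steps=240, lr=0.1):
    """Vanilla local SGD: each worker takes `sync_period` local steps on
    its own shard, then models are averaged once -- so communication
    rounds drop from `total_steps` to `total_steps / sync_period`."""
    w = np.zeros(workers[0][0].shape[1])
    comms = 0
    for _ in range(total_steps // sync_period):
        locals_ = []
        for X, y in workers:
            w_i = w.copy()
            for _ in range(sync_period):
                grad = X.T @ (X @ w_i - y) / len(y)  # least-squares gradient
                w_i -= lr * grad
            locals_.append(w_i)
        w = np.mean(locals_, axis=0)                 # model synchronization
        comms += 1
    return w, comms

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])
workers = [(X, X @ true_w) for X in (rng.normal(size=(64, 3)) for _ in range(4))]

w1, comms1 = local_sgd(workers, sync_period=1)   # sync every step
w8, comms8 = local_sgd(workers, sync_period=8)   # sync every 8 steps
# Same fit on this noiseless problem, with 8x fewer communication rounds.
```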

Privacy-Preserving Asynchronous Federated Learning Algorithms for Multi-Party Vertically Collaborative Learning

no code implementations • 14 Aug 2020 • Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng Huang

To the best of our knowledge, AFSGD-VP and its SVRG and SAGA variants are the first asynchronous federated learning algorithms for vertically partitioned data.

Federated Learning · Privacy Preserving

Step-Ahead Error Feedback for Distributed Training with Compressed Gradient

no code implementations • 13 Aug 2020 • An Xu, Zhouyuan Huo, Heng Huang

Both our theoretical and empirical results show that our new methods can handle the "gradient mismatch" problem.
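The step-ahead correction is the paper's contribution and is not spelled out in this snippet; the sketch below shows only the classic error-feedback baseline it improves on, where the part of the update removed by compression — the source of the "gradient mismatch" — is kept locally and re-added before the next compression. Names and hyperparameters are illustrative:

```python
import numpy as np

def top_k(v, k):
    """Biased top-k compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad_fn, w0, lr=0.1, k=1, steps=300):
    """Error-feedback SGD: whatever compression removes is stored in
    `err` and re-added at the next step, so no signal is lost forever."""
    w, err = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        p = lr * grad_fn(w) + err   # update we would like to send
        p_c = top_k(p, k)           # what is actually transmitted
        err = p - p_c               # residual kept for the next step
        w -= p_c
    return w

target = np.array([1.0, -2.0, 3.0])
w = ef_sgd(lambda w: w - target, np.zeros(3))  # quadratic toy objective
```

Even though each step transmits a single coordinate, the accumulated residual ensures every coordinate is eventually served, and the iterates approach the optimum.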

Detached Error Feedback for Distributed SGD with Random Sparsification

no code implementations • 11 Apr 2020 • An Xu, Heng Huang

To tackle this important issue, we improve communication-efficient distributed SGD from a novel aspect: the trade-off between the variance and the second moment of the gradient.

Generalization Bounds · Image Classification +1
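The paper's own estimator is not given in this snippet; the sketch below only illustrates the trade-off it refers to, using the standard rand-k sparsifier (illustrative names): rescaling the kept entries by d/k keeps the estimate unbiased, but inflates its second moment by the same factor d/k.

```python
import numpy as np

def rand_k(v, k, rng):
    """rand-k sparsifier with d/k rescaling: unbiased, but the rescaling
    multiplies the expected squared norm (second moment) by d/k."""
    d = len(v)
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)  # keep k random coordinates
    out[idx] = v[idx] * (d / k)                 # rescale for unbiasedness
    return out

rng = np.random.default_rng(0)
g = rng.normal(size=100)
samples = np.stack([rand_k(g, 10, rng) for _ in range(20000)])

mean_est = samples.mean(axis=0)                    # ~= g        (unbiased)
second_moment = (samples ** 2).sum(axis=1).mean()  # ~= 10 * ||g||^2
```

Dropping the d/k rescaling would shrink the second moment but bias the estimate — exactly the variance/second-moment tension the abstract mentions.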

Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training

no code implementations • 25 Feb 2020 • An Xu, Zhouyuan Huo, Heng Huang

The communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.

Quantization
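The optimality condition derived in the paper cannot be reconstructed from this snippet; below is a generic unbiased stochastic quantizer of the kind such analyses cover (QSGD-style uniform levels; all names and parameters are illustrative):

```python
import numpy as np

def stochastic_quantize(v, levels, rng):
    """Unbiased stochastic quantization: each entry is randomly rounded to
    one of `levels`+1 uniform levels of the max magnitude, so only the
    scale plus ~log2(levels) bits per entry need to be communicated."""
    scale = np.abs(v).max()
    if scale == 0.0:
        return v.copy()
    ratio = np.abs(v) / scale * levels
    low = np.floor(ratio)
    round_up = rng.random(v.shape) < (ratio - low)  # round up w.p. the
    return np.sign(v) * scale * (low + round_up) / levels  # fractional part

rng = np.random.default_rng(0)
g = rng.normal(size=50)
q = stochastic_quantize(g, 4, rng)  # one quantized message
mean_est = np.mean([stochastic_quantize(g, 4, rng) for _ in range(20000)],
                   axis=0)          # averaging recovers g (unbiasedness)
```

Fewer levels mean fewer bits per entry but higher quantization variance — the knob whose optimal setting the paper studies.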

On the Acceleration of Deep Learning Model Parallelism with Staleness

no code implementations • CVPR 2020 • An Xu, Zhouyuan Huo, Heng Huang

Training deep convolutional neural networks for computer vision problems is slow and inefficient, especially when the model is large and distributed across multiple devices.
