no code implementations • CVPR 2022 • An Xu, Wenqi Li, Pengfei Guo, Dong Yang, Holger Roth, Ali Hatamizadeh, Can Zhao, Daguang Xu, Heng Huang, Ziyue Xu
In this work, we propose FedSM, a novel training framework that avoids the client drift issue and, for the first time, closes the generalization gap with centralized training on medical image segmentation tasks.
no code implementations • 12 Mar 2022 • Pengfei Guo, Dong Yang, Ali Hatamizadeh, An Xu, Ziyue Xu, Wenqi Li, Can Zhao, Daguang Xu, Stephanie Harmon, Evrim Turkbey, Baris Turkbey, Bradford Wood, Francesca Patella, Elvira Stellato, Gianpaolo Carrafiello, Vishal M. Patel, Holger R. Roth
Federated learning (FL) is a distributed machine learning technique that enables collaborative model training while avoiding explicit data sharing.
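For context on the setup described above, federated training is commonly organized around the FedAvg pattern: each client runs a few local optimization steps on its own data, and a server averages the resulting models, so raw data never leaves the clients. A minimal sketch of that pattern (the least-squares objective and all function names here are illustrative stand-ins, not taken from the paper):

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, steps=5):
    """One client's local update: a few SGD steps on a simple
    least-squares objective (an illustrative stand-in for a real model)."""
    w = weights.copy()
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: every client trains locally from the current
    global model, then the server averages the results, weighted by
    each client's data size. No raw data is exchanged."""
    updates, sizes = [], []
    for data in clients:
        updates.append(local_sgd(global_w, data))
        sizes.append(len(data[1]))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```

With i.i.d. client data this converges much like centralized SGD; the "client drift" problem the papers above address arises when client data distributions differ.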
no code implementations • 8 Feb 2021 • An Xu, Heng Huang
In this work, we propose a new method to improve the training performance in cross-silo FL by maintaining double momentum buffers.
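The abstract names the core idea, two momentum buffers, but not the update rule. One plausible reading is a client-side momentum buffer inside local training plus a server-side momentum buffer applied to the averaged model delta. The sketch below illustrates that double-buffer pattern with a toy least-squares objective; the function names, hyperparameters, and the exact coupling of the two buffers are my assumptions, and the paper's actual rule may differ:

```python
import numpy as np

def local_momentum_sgd(w, data, lr=0.05, beta=0.9, steps=5):
    """Client-side buffer: plain momentum SGD on a least-squares
    objective (illustrative stand-in, not the paper's setup)."""
    w = w.copy()
    buf = np.zeros_like(w)  # first momentum buffer (per client)
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        buf = beta * buf + grad
        w -= lr * buf
    return w

def double_momentum_round(global_w, server_buf, clients,
                          server_lr=1.0, server_beta=0.5):
    """Server-side buffer: momentum applied to the averaged model delta
    (the 'pseudo-gradient'). Together with the client buffers this gives
    a double-momentum pattern."""
    avg_w = np.mean([local_momentum_sgd(global_w, d) for d in clients], axis=0)
    delta = global_w - avg_w                     # direction of local progress
    server_buf = server_beta * server_buf + delta  # second momentum buffer
    return global_w - server_lr * server_buf, server_buf
```

The server-side buffer smooths the round-to-round updates, which is one way momentum can help when each silo only contributes a few local steps per round.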
no code implementations • 1 Jan 2021 • An Xu, Xiao Yan, Hongchang Gao, Heng Huang
The heavy communication for model synchronization is a major bottleneck for scaling up the distributed deep neural network training to many workers.
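A common way to relieve this synchronization bottleneck is gradient compression. A standard building block, top-k sparsification with an error-feedback residual, is sketched below purely for context; it is a generic illustration, not the specific method proposed in the paper above:

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Compress a gradient to its k largest-magnitude entries before
    communication, carrying the discarded mass forward in a local
    residual buffer so nothing is permanently lost."""
    corrected = grad + residual                  # add back previously dropped mass
    idx = np.argsort(np.abs(corrected))[-k:]     # indices of k largest entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]                 # only these values are sent
    new_residual = corrected - sparse            # what we did not send this round
    return sparse, new_residual
```

Each worker sends only `k` values per step; the residual buffer ensures the dropped coordinates are eventually transmitted, which is what keeps convergence close to uncompressed SGD.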
no code implementations • 14 Aug 2020 • Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng Huang
To the best of our knowledge, AFSGD-VP and its SVRG and SAGA variants are the first asynchronous federated learning algorithms for vertically partitioned data.
no code implementations • 13 Aug 2020 • An Xu, Zhouyuan Huo, Heng Huang
Both our theoretical and empirical results show that our new methods can handle the "gradient mismatch" problem.
no code implementations • 11 Apr 2020 • An Xu, Heng Huang
To tackle this important issue, we improve communication-efficient distributed SGD from a novel angle: the trade-off between the variance and the second moment of the gradient.
no code implementations • 25 Feb 2020 • An Xu, Zhouyuan Huo, Heng Huang
Communicating gradients is costly when training deep neural networks across multiple devices in computer vision applications.
no code implementations • CVPR 2020 • An Xu, Zhouyuan Huo, Heng Huang
Training deep convolutional neural networks for computer vision problems is slow and inefficient, especially when the model is large and training is distributed across multiple devices.