no code implementations • 3 Feb 2023 • Zihu Wang, Yu Wang, Hanbin Hu, Peng Li
Contrastive learning demonstrates great promise for representation learning.
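For readers unfamiliar with the setup, here is a minimal sketch of a generic InfoNCE-style contrastive loss in PyTorch; this is a standard textbook formulation for illustration, not necessarily the exact loss used in this paper:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """A generic InfoNCE-style contrastive loss over two augmented
    views z1, z2 of the same batch (each of shape [N, d]). Matching
    rows are positives; all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # [N, N] cosine-similarity matrix
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```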
2 code implementations • 8 Nov 2021 • Bicheng Ying, Kun Yuan, Hanbin Hu, Yiming Chen, Wotao Yin
On mainstream DNN training tasks, BlueFog reaches much higher throughput and achieves an overall $1.2\times \sim 1.8\times$ speedup over Horovod, a state-of-the-art distributed deep learning package based on Ring-Allreduce.
2 code implementations • NeurIPS 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, Wotao Yin
Experimental results on a variety of tasks and models demonstrate that decentralized (momentum) SGD over exponential graphs promises both fast and high-quality training.
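As a rough illustration of the topology involved, the sketch below builds the neighbor set of a static exponential graph, in which node i links to peers at power-of-two offsets modulo the network size; details beyond this construction (such as the one-peer variant that cycles through one offset per iteration) are omitted here:

```python
def exponential_graph_neighbors(i, n):
    """Neighbors of node i in a static exponential graph on n nodes:
    peers at offsets 1, 2, 4, ... (mod n)."""
    neighbors = set()
    k = 1
    while k < n:
        neighbors.add((i + k) % n)
        k *= 2
    return sorted(neighbors)

# Example: on 8 nodes, node 0 communicates with nodes 1, 2, and 4,
# so each node talks to only O(log n) peers per averaging step.
print(exponential_graph_neighbors(0, 8))  # [1, 2, 4]
```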
no code implementations • 29 Sep 2021 • Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Yingya Zhang, Pan Pan, Wotao Yin
Decentralized adaptive gradient methods, in which each node averages only with its neighbors, are critical for saving communication and wall-clock training time in deep learning tasks.
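A minimal sketch of the neighbor-averaging step this entry describes, in plain PyTorch; the uniform mixing weights and the helper name are illustrative assumptions, not the paper's exact scheme:

```python
import torch

def neighbor_average(params, neighbor_params, self_weight=0.5):
    """Mix a node's parameters with copies received from its neighbors
    only, instead of a global all-reduce across every worker.
    neighbor_params: list of parameter tensors from neighbor nodes.
    Weights here are uniform and purely illustrative."""
    w_nbr = (1.0 - self_weight) / max(len(neighbor_params), 1)
    mixed = self_weight * params
    for p in neighbor_params:
        mixed = mixed + w_nbr * p
    return mixed

# Toy usage: one node mixing with two neighbors.
x = torch.ones(3)
out = neighbor_average(x, [torch.zeros(3), 2 * torch.ones(3)])
print(out)  # tensor([1., 1., 1.])  since 0.5*1 + 0.25*0 + 0.25*2 = 1
```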
no code implementations • 19 Jun 2019 • Hanbin Hu, Mit Shah, Jianhua Z. Huang, Peng Li
It has been shown that deep neural networks (DNNs) may be vulnerable to adversarial attacks, raising concerns about their robustness, particularly for safety-critical applications.
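For context on what such an attack looks like, here is a hedged sketch of the well-known Fast Gradient Sign Method (FGSM); it is a generic example of an adversarial perturbation, not the method proposed or analyzed in this paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: perturb input x in the direction that
    increases the classification loss, bounded by eps in the L-infinity
    norm. Clamping to [0, 1] assumes normalized image inputs."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```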