no code implementations • 1 Nov 2022 • Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt
We conducted a series of experiments to investigate whether and how distillation can be used to accelerate training, using ResNet-50 trained on ImageNet and BERT trained on C4 with a masked language modeling objective (evaluated on GLUE), on common enterprise hardware (8x NVIDIA A100 GPUs).
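The listing does not spell out the method, but standard knowledge distillation accelerates or regularizes training by adding a teacher-matching term to the loss. A minimal PyTorch sketch, assuming Hinton-style logit distillation; the temperature T and weight alpha are hypothetical defaults, not values from the paper:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend cross-entropy on the labels with a KL term matching the
    student's softened predictions to the teacher's. T and alpha are
    illustrative, not taken from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a training loop, the teacher's logits would be computed under torch.no_grad() and this loss used in place of plain cross-entropy.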
no code implementations • 8 Oct 2021 • Cody Blakeney, Gentry Atkinson, Nathaniel Huish, Yan Yan, Vangelis Metsis, Ziliang Zong
Algorithmic bias is of increasing concern, both to the research community and to society at large.
1 code implementation • 15 Jun 2021 • Cody Blakeney, Nathaniel Huish, Yan Yan, Ziliang Zong
In recent years, the ubiquitous deployment of AI has raised serious concerns regarding algorithmic bias, discrimination, and fairness.
1 code implementation • 5 Dec 2020 • Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong
Experimental results on an AMD server with four GeForce RTX 2080Ti GPUs show that our algorithm achieves a 3x speedup plus 19% energy savings on VGG distillation, and a 3.5x speedup plus 29% energy savings on ResNet distillation, both with negligible accuracy loss.
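The listing reports speedup and energy numbers but not how they were measured. As a loose illustration only, per-GPU energy deltas can be read from NVML's cumulative energy counter via the pynvml bindings; the run_fn hook is hypothetical and this is not the paper's actual instrumentation:

```python
import time
import pynvml

def measure_gpu_energy(run_fn, num_gpus=4):
    """Report wall-clock time and per-GPU energy (joules) for a run,
    using NVML's cumulative energy counter (Volta-class GPUs or newer).
    Illustrative sketch; not the measurement setup from the paper."""
    pynvml.nvmlInit()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(num_gpus)]
    # nvmlDeviceGetTotalEnergyConsumption returns millijoules since driver load
    before = [pynvml.nvmlDeviceGetTotalEnergyConsumption(h) for h in handles]
    start = time.time()
    run_fn()  # e.g. the distillation job being profiled
    elapsed = time.time() - start
    after = [pynvml.nvmlDeviceGetTotalEnergyConsumption(h) for h in handles]
    joules = [(a - b) / 1000.0 for a, b in zip(after, before)]
    pynvml.nvmlShutdown()
    return elapsed, joules
```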