Search Results for author: Cody Blakeney

Found 4 papers, 2 papers with code

Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation

no code implementations • 1 Nov 2022 • Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt

We conducted a series of experiments to investigate whether and how distillation can be used to accelerate training, using ResNet-50 trained on ImageNet and BERT trained on C4 with a masked language modeling objective and evaluated on GLUE, on common enterprise hardware (8x NVIDIA A100).

Image Classification • Language Modelling +1
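The abstract above treats distillation as a tool for accelerating training rather than only for compressing a model. For reference, below is a minimal sketch of the standard soft-target distillation objective; the `temperature` and `alpha` values and the exact loss formulation are illustrative assumptions, not the paper's reported setup.

```python
# Minimal sketch of a standard knowledge-distillation objective.
# Hyperparameters are hypothetical; the paper's loss and schedule may differ.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the usual hard-label loss with a soft-target KL term."""
    # Hard-label cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Soft-target loss: KL divergence between temperature-scaled teacher
    # and student distributions (scaled by T^2, as is conventional).
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2

    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```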

Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation

1 code implementation • 15 Jun 2021 • Cody Blakeney, Nathaniel Huish, Yan Yan, Ziliang Zong

In recent years, the ubiquitous deployment of AI has raised serious concerns regarding algorithmic bias, discrimination, and fairness.

Fairness • Knowledge Distillation

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression

1 code implementation • 5 Dec 2020 • Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong

Experimental results on an AMD server with four GeForce RTX 2080 Ti GPUs show that our algorithm achieves a 3x speedup plus 19% energy savings on VGG distillation, and a 3.5x speedup plus 29% energy savings on ResNet distillation, both with negligible accuracy loss.

Knowledge Distillation • Neural Network Compression +3
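The speedup comes from distilling the network block by block, which allows the blocks to be trained independently and therefore in parallel. Below is a rough sketch of what per-block training could look like in PyTorch; the block partitioning, the MSE objective, and the `train_block` helper are assumptions for illustration, not the paper's released code.

```python
# Illustrative sketch of per-block distillation: each compressed student
# block is trained to reproduce its teacher block's output, so blocks can
# be optimized independently (and hence in parallel). Block definitions,
# loss choice (MSE), and loop details are assumptions, not the paper's code.
import torch
import torch.nn as nn

def train_block(student_block, teacher_block, prev_teacher_blocks,
                data_loader, epochs=1, lr=1e-3, device="cuda"):
    """Fit one student block against the frozen teacher, independently
    of all other student blocks."""
    student_block.to(device).train()
    teacher_block.to(device).eval()
    prev_teacher_blocks.to(device).eval()
    optimizer = torch.optim.Adam(student_block.parameters(), lr=lr)
    criterion = nn.MSELoss()

    for _ in range(epochs):
        for inputs, _ in data_loader:
            inputs = inputs.to(device)
            with torch.no_grad():
                # Teacher features feeding into, and out of, this block.
                block_in = prev_teacher_blocks(inputs)
                target = teacher_block(block_in)
            loss = criterion(student_block(block_in), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student_block
```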
