Search Results for author: Logan Lawrence

Found 1 paper, 0 papers with code

Efficient Transformer Knowledge Distillation: A Performance Review

no code implementations • 22 Nov 2023 • Nathan Brown, Ashton Williamson, Tahj Anderson, Logan Lawrence

In this work, we provide an evaluation of model compression via knowledge distillation on efficient attention transformers.

Knowledge Distillation · Model Compression · +4
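
The abstract above refers to knowledge distillation as a model-compression technique. As a rough illustration of that technique in general (not the paper's own training setup), the sketch below shows the standard soft-target distillation loss, where a smaller student model is trained to match a larger teacher's softened output distribution; the temperature and mixing weight are assumed placeholder values.

```python
# Minimal sketch of a generic knowledge-distillation loss:
# soft-target KL divergence plus hard-label cross-entropy.
# Illustrative only; not the evaluated paper's exact method.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: match the student's softened distribution to the teacher's.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random tensors standing in for model outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```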
