Search Results for author: Chris Gilmer-Hill

Found 1 paper, 0 papers with code

Knowledge Distillation for Efficient Sequences of Training Runs

no code implementations • 11 Mar 2023 • Xingyu Liu, Alex Leonardi, Lu Yu, Chris Gilmer-Hill, Matthew Leavitt, Jonathan Frankle

We find that augmenting future runs with KD from previous runs dramatically reduces the time necessary to train these models, even taking into account the overhead of KD.

Knowledge Distillation
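To make the approach concrete, below is a minimal sketch of how a future training run can be augmented with KD from a previous run: the prior run's checkpoint serves as a frozen teacher, and the student optimizes a standard distillation objective (soft-target KL divergence mixed with hard-label cross-entropy). This is an illustrative PyTorch sketch, not the paper's exact recipe; the function names, temperature `T`, and mixing weight `alpha` are assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective: temperature-scaled KL divergence to the
    teacher's soft targets, mixed with hard-label cross-entropy.
    T and alpha are illustrative defaults, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to be independent of T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, batch, optimizer):
    """One optimization step of a 'future run', where the teacher is a
    frozen model loaded from a previous run's checkpoint (hypothetical
    setup for illustration)."""
    inputs, labels = batch
    with torch.no_grad():  # teacher is fixed; no gradients needed
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = kd_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The teacher forward pass adds overhead per step, which is the cost the abstract refers to; the claimed benefit is that the softened targets speed up convergence enough to more than offset it.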
