Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression

14 Oct 2020 · Artem Chumachenko, Daniil Gavrilov, Nikita Balagansky, Pavel Kalaidin

In this work, we present a novel approach for simultaneous knowledge transfer and model compression called Weight Squeezing. With this method, we perform knowledge transfer from a teacher model by learning a mapping from its weights to the weights of a smaller student model. We applied Weight Squeezing to a pre-trained text classification model based on BERT-Medium and compared our method to various other knowledge transfer and model compression methods on the GLUE multitask benchmark. We observed that our approach produces better results while being significantly faster than other methods for training student models. We also proposed a variant of Weight Squeezing called Gated Weight Squeezing, which combines fine-tuning of the BERT-Medium model with learning a mapping from BERT-Base weights. We showed that fine-tuning with Gated Weight Squeezing outperforms plain fine-tuning of BERT-Medium as well as concurrent SoTA approaches while being much easier to implement.
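To make the idea concrete, here is a minimal PyTorch sketch of what such a weight mapping could look like. This is an illustration under assumptions, not the paper's implementation: the class names (`WeightSqueezing`, `GatedWeightSqueezing`), the two-sided linear projection form of the mapping, and the elementwise sigmoid gate are hypothetical choices. Only the general scheme, mapping frozen teacher weights to student weights and, in the gated variant, blending them with directly trained student weights, comes from the abstract.

```python
import torch
import torch.nn as nn


class WeightSqueezing(nn.Module):
    """Hypothetical sketch: produce a smaller student weight matrix
    from a frozen teacher weight matrix via learnable projections."""

    def __init__(self, teacher_in, teacher_out, student_in, student_out):
        super().__init__()
        # One plausible mapping form: W_student = A @ W_teacher @ B,
        # shrinking the output and input dimensions respectively.
        self.left = nn.Parameter(torch.randn(student_out, teacher_out) * 0.02)
        self.right = nn.Parameter(torch.randn(teacher_in, student_in) * 0.02)

    def forward(self, teacher_weight):
        # teacher_weight: (teacher_out, teacher_in) -> (student_out, student_in)
        return self.left @ teacher_weight @ self.right


class GatedWeightSqueezing(nn.Module):
    """Hypothetical sketch of the gated variant: a learnable gate blends
    a directly fine-tuned student weight with the squeezed teacher weight."""

    def __init__(self, teacher_in, teacher_out, student_in, student_out):
        super().__init__()
        self.squeeze = WeightSqueezing(teacher_in, teacher_out,
                                       student_in, student_out)
        self.student_weight = nn.Parameter(
            torch.randn(student_out, student_in) * 0.02)
        # Gate initialized at zero so sigmoid starts at an even 0.5 blend.
        self.gate = nn.Parameter(torch.zeros(student_out, student_in))

    def forward(self, teacher_weight):
        g = torch.sigmoid(self.gate)
        return g * self.student_weight + (1 - g) * self.squeeze(teacher_weight)


# Usage: derive a BERT-Medium-sized weight (hidden size 512) from a
# BERT-Base-sized teacher weight (hidden size 768).
teacher_w = torch.randn(768, 768)  # frozen teacher weight, not trained
gws = GatedWeightSqueezing(768, 768, 512, 512)
student_w = gws(teacher_w)         # (512, 512), used in the student layer
```

During training, only the projection, gate, and student parameters would receive gradients; the teacher weights stay fixed, which is consistent with the speed advantage the abstract claims over methods that require running the teacher on every batch.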


