Transformers

AutoTinyBERT is an efficient BERT variant found through neural architecture search. Specifically, one-shot learning is used to obtain a large Super Pretrained Language Model (SuperPLM), trained with either pre-training or task-agnostic BERT distillation objectives. Then, given a specific latency constraint, an evolutionary algorithm is run on the SuperPLM to search for optimal architectures. Finally, the corresponding sub-models are extracted based on the optimal architectures and further trained.

Source: AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
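The search step above can be sketched as a simple evolutionary loop over architecture hyper-parameters, keeping only candidates that satisfy the latency budget. This is a minimal illustrative sketch, not the actual AutoTinyBERT implementation: the search space, the latency proxy, and the fitness function below are all stand-ins (a real run would evaluate sub-models extracted from the SuperPLM and measure latency on target hardware or with a learned predictor).

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical search space of sub-model hyper-parameters
# (illustrative, not the exact AutoTinyBERT space).
SEARCH_SPACE = {
    "num_layers": [2, 4, 6],
    "hidden_size": [128, 256, 384, 512],
    "num_heads": [2, 4, 8],
    "ffn_size": [512, 1024, 2048],
}

def sample_arch():
    """Sample a random architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def estimated_latency(arch):
    # Toy latency proxy that grows with depth and width; a real search
    # would measure latency on hardware or use a learned predictor.
    return arch["num_layers"] * (arch["hidden_size"] + arch["ffn_size"]) / 1000.0

def fitness(arch):
    # Stand-in for evaluating the sub-model extracted from the SuperPLM;
    # here, a toy score that simply rewards model capacity.
    return arch["num_layers"] * arch["hidden_size"] * arch["num_heads"]

def mutate(arch):
    """Randomly change one hyper-parameter of a parent architecture."""
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(latency_budget, population=20, generations=10):
    pop = [sample_arch() for _ in range(population)]
    for _ in range(generations):
        # Keep only architectures that meet the latency constraint.
        feasible = [a for a in pop if estimated_latency(a) <= latency_budget]
        if not feasible:
            pop = [sample_arch() for _ in range(population)]
            continue
        # Select top performers as parents, refill with mutations.
        feasible.sort(key=fitness, reverse=True)
        parents = feasible[: max(2, population // 4)]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(population - len(parents))]
    feasible = [a for a in pop if estimated_latency(a) <= latency_budget]
    return max(feasible, key=fitness) if feasible else None

best = evolve(latency_budget=3.0)
print(best)
```

Once the search returns an optimal architecture, the corresponding sub-model would be sliced out of the SuperPLM and further trained, as described above.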

Papers



Tasks


Task Papers Share
Quantization 1 50.00%
One-Shot Learning 1 50.00%

Components


No components listed.

Categories