Search Results for author: Celine Liang

Found 1 paper, 0 papers with code

Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings

no code implementations · WS 2019 · Luke Melas-Kyriazi, George Han, Celine Liang

Recent research points to knowledge distillation as a potential solution to the size and deployment cost of large language models (LMs): when training data for a given task is abundant, a large (teacher) LM can be distilled into a small task-specific (student) network with minimal loss of performance.
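For context, the teacher-student distillation the excerpt refers to is typically trained with a weighted mix of a temperature-softened KL term against the teacher's logits and the ordinary cross-entropy on gold labels. The sketch below is a minimal, illustrative PyTorch version of that standard objective, not the paper's generation-distillation method; the function name and the temperature/alpha values are assumptions for illustration.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Illustrative hyperparameters; temperature and alpha are
    # assumptions, not values taken from the paper.
    # Soften both distributions; the T^2 factor keeps the gradient
    # magnitude of the soft term comparable to the hard-label term.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Ordinary supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

A training loop would call this on each batch, with teacher logits computed on the fly or precomputed; in the low-data settings the title names, the hard-label term has little data to work with, which is the gap the paper addresses.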

Tasks: General Classification, Knowledge Distillation, +4
