Search Results for author: Wutao Lin

Found 4 papers, 1 paper with code

Reinforced Multi-Teacher Selection for Knowledge Distillation

no code implementations • 11 Dec 2020 • Fei Yuan, Linjun Shou, Jian Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang

When multiple teacher models are available for distillation, state-of-the-art methods assign a fixed weight to each teacher model for the whole distillation process.

Knowledge Distillation · Model Compression
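
For context, the fixed-weight baseline described in the excerpt above combines each teacher's soft labels with a static per-teacher weight. The following is a minimal PyTorch sketch of that baseline loss, not the paper's released code; the function name, hyperparameters, and loss weighting are assumptions for illustration (the paper's contribution is to replace the fixed weights with a reinforced, per-example teacher selection).

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, teacher_weights,
                          labels, temperature=2.0, alpha=0.5):
    """Fixed-weight multi-teacher distillation loss (illustrative sketch)."""
    # Soft-label term: weighted sum of KL divergences to each teacher.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = 0.0
    for w, t_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        kd = kd + w * F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # standard temperature rescaling

    # Hard-label term on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

With fixed `teacher_weights`, every training example trusts all teachers to the same degree; a per-example selection policy would instead decide which teachers to listen to for each input.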

Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System

no code implementations • 18 Oct 2019 • Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

The experimental results show that our method significantly outperforms the baseline methods and even achieves results comparable to the original teacher models, along with a substantial speedup in model inference.

General Knowledge · Knowledge Distillation +3

Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System

no code implementations • 21 Apr 2019 • Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang

Deep pre-training and fine-tuning models (such as BERT and OpenAI GPT) have demonstrated excellent results in question answering.

Knowledge Distillation · Model Compression +1

NeuronBlocks: Building Your NLP DNN Models Like Playing Lego

2 code implementations • IJCNLP 2019 • Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang

Deep Neural Networks (DNNs) have been widely employed in industry to address various Natural Language Processing (NLP) tasks.
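
To make the "Lego" idea in the title concrete, a model can be assembled block by block from a declarative config. The sketch below is a conceptual illustration in plain PyTorch only; it does not reproduce the actual NeuronBlocks API or configuration schema, and all block names and config keys here are assumptions.

```python
import torch.nn as nn

# Conceptual "Lego"-style assembly: standard NLP building blocks are registered
# once and then composed from a declarative config (illustrative only).
BLOCKS = {
    "embedding": lambda cfg: nn.Embedding(cfg["vocab_size"], cfg["dim"]),
    "lstm":      lambda cfg: nn.LSTM(cfg["dim"], cfg["dim"], batch_first=True),
    "linear":    lambda cfg: nn.Linear(cfg["dim"], cfg["num_classes"]),
}

def build_model(config):
    """Assemble an nn.ModuleList from a list of block specifications."""
    return nn.ModuleList([BLOCKS[spec["type"]](spec) for spec in config["blocks"]])

config = {
    "blocks": [
        {"type": "embedding", "vocab_size": 30000, "dim": 128},
        {"type": "lstm", "dim": 128},
        {"type": "linear", "dim": 128, "num_classes": 2},
    ]
}
model = build_model(config)
```

The appeal of this style is that swapping an architecture becomes a config change rather than a code change, which is the kind of reuse the paper's toolkit is aimed at.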
