Search Results for author: Jingwei Chen

Found 3 papers, 3 papers with code

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation

1 code implementation • ICCV 2019 • Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma

Unlike traditional knowledge distillation, a knowledge-transfer method between networks that forces a student neural network to approximate the softmax-layer outputs of a pre-trained teacher network, the proposed self-distillation framework distills knowledge within the network itself.

Knowledge Distillation
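The core idea of the abstract, softening outputs with a temperature and minimizing the divergence between a shallow branch and the deepest classifier, can be illustrated with a minimal sketch. The function names, temperature value, and pure-Python formulation are illustrative assumptions, not the paper's actual implementation:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax, as commonly used in knowledge distillation.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def self_distillation_loss(deep_logits, shallow_logits, temperature=3.0):
    # KL divergence pushing a shallow classifier's softened distribution
    # toward the deepest classifier's softened distribution (hypothetical
    # sketch of the "be your own teacher" idea, not the paper's code).
    teacher = softmax(deep_logits, temperature)
    student = softmax(shallow_logits, temperature)
    return sum(t * math.log(t / s) for t, s in zip(teacher, student))
```

When the shallow branch already matches the deepest classifier, the loss is zero; any disagreement yields a positive penalty, so minimizing it transfers knowledge from deep to shallow layers within one network.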

Front-to-End Bidirectional Heuristic Search with Near-Optimal Node Expansions

1 code implementation • 10 Mar 2017 • Jingwei Chen, Robert C. Holte, Sandra Zilles, Nathan R. Sturtevant

pairs, and present a new admissible front-to-end bidirectional heuristic search algorithm, Near-Optimal Bidirectional Search (NBS), that is guaranteed to do no more than 2VC expansions.
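The pair-based lower bound that drives NBS can be sketched briefly. For a forward node u and backward node v, any solution passing through both costs at least max(f_F(u), f_B(v), g_F(u) + g_B(v)); NBS repeatedly expands a pair minimizing this bound. The function names and the brute-force pair selection below are illustrative assumptions (the actual algorithm uses a specialized queue to find the best pair efficiently):

```python
def pair_lower_bound(g_f, h_f, g_b, h_b):
    # lb(u, v) = max(f_F(u), f_B(v), g_F(u) + g_B(v)): a lower bound on the
    # cost of any solution path constrained to pass through u and v.
    return max(g_f + h_f, g_b + h_b, g_f + g_b)

def best_pair(open_forward, open_backward):
    # Brute-force sketch: pick the (u, v) pair with the smallest lower bound.
    # Each node is a hypothetical (g, h) tuple of cost-so-far and heuristic.
    return min(
        ((u, v) for u in open_forward for v in open_backward),
        key=lambda p: pair_lower_bound(p[0][0], p[0][1], p[1][0], p[1][1]),
    )
```

Expanding only pairs whose bound is minimal is what limits NBS to at most twice the necessary (vertex-cover-sized) number of expansions.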
