Search Results for author: Jingwen Ye

Found 8 papers, 5 papers with code

Spot-adaptive Knowledge Distillation

1 code implementation • 5 May 2022 • Jie Song, Ying Chen, Jingwen Ye, Mingli Song

Knowledge distillation (KD) has become a well-established paradigm for compressing deep neural networks.

Knowledge Distillation
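For context, the classic soft-label distillation objective this line of work builds on (Hinton et al.'s KD, not the spot-adaptive variant this paper proposes) can be sketched roughly as follows; the function name and hyperparameter values are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic KD loss: weighted sum of cross-entropy on hard labels and
    KL divergence between temperature-softened teacher/student outputs."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude matches the hard term
    return alpha * hard + (1.0 - alpha) * soft
```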

Safe Distillation Box

no code implementations • 5 Dec 2021 • Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, Mingli Song

In other words, any user may employ a model in SDB for inference, but only authorized users can perform KD on the model.

Knowledge Distillation
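As a toy illustration of the access pattern described above only (not the paper's actual protection mechanism, which is not detailed in this snippet), one might picture a wrapper that serves hard predictions to everyone but gates the soft outputs needed for KD behind an authorization key; all names here are hypothetical:

```python
import torch

class ToySafeBox:
    """Toy sketch of the SDB access pattern, NOT the paper's mechanism:
    public users get hard predictions; distillation-grade soft logits
    require an authorization key."""

    def __init__(self, model, auth_key):
        self.model = model
        self._auth_key = auth_key

    @torch.no_grad()
    def predict(self, x):
        # Public inference interface: hard labels only.
        return self.model(x).argmax(dim=1)

    @torch.no_grad()
    def soft_logits(self, x, key):
        # KD interface: gated behind authorization.
        if key != self._auth_key:
            raise PermissionError("KD access restricted to authorized users")
        return self.model(x)
```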

Online Knowledge Distillation for Efficient Pose Estimation

no code implementations • ICCV 2021 • Zheng Li, Jingwen Ye, Mingli Song, Ying Huang, Zhigeng Pan

However, existing pose distillation works rely on a heavy pre-trained estimator to perform knowledge transfer and require a complex two-stage learning procedure.

Knowledge Distillation • Pose Estimation • +1

DEPARA: Deep Attribution Graph for Deep Knowledge Transferability

1 code implementation • CVPR 2020 • Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Mingli Song

In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from pre-trained deep neural networks (PR-DNNs).

Model Selection • Transfer Learning

Amalgamating Filtered Knowledge: Learning Task-customized Student from Multi-task Teachers

1 code implementation • 28 May 2019 • Jingwen Ye, Xinchao Wang, Yixin Ji, Kairi Ou, Mingli Song

Many well-trained Convolutional Neural Network (CNN) models have now been released online by developers for the sake of effortless reproduction.

Neural Style Transfer: A Review

7 code implementations • 11 May 2017 • Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, Mingli Song

We first propose a taxonomy of current algorithms in the field of NST.

Style Transfer
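For background, the Gram-matrix style representation at the heart of many algorithms surveyed in this review (due to Gatys et al.) can be sketched as below; the helper names and the normalization constant are illustrative:

```python
import torch

def gram_matrix(features):
    """Gram matrix of a conv feature map (B, C, H, W): channel-wise
    feature correlations, used by Gatys et al. as a style representation."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated_feats, style_feats):
    """MSE between Gram matrices of the generated and style images."""
    return torch.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)
```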
