Knowledge Distillation

Online Multi-granularity Distillation

Introduced by Ren et al. in Online Multi-Granularity Distillation for GAN Compression

OMGD, or Online Multi-Granularity Distillation, is a framework for learning efficient GANs. The student generator is optimized in a discriminator-free and ground-truth-free setting. The scheme trains the teacher and student alternately, so that the two generators are promoted iteratively and progressively. The progressively optimized teacher generator helps to warm up the student and guides its optimization direction step by step.
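
The sketch below illustrates how such an online scheme might be wired up for a paired image-to-image task, with the teacher updated against a discriminator and ground truth while the student is updated only against the current teacher's output. For brevity a single teacher is shown, and every module and loss choice here (TinyGenerator, TinyDiscriminator, the non-saturating GAN loss, L1 distillation) is an illustrative assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in generator; the real models are full image-to-image networks."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

teacher_G, student_G = TinyGenerator(64), TinyGenerator(16)  # student is narrower
disc = TinyDiscriminator()
opt_T = torch.optim.Adam(teacher_G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(disc.parameters(), lr=2e-4)
opt_S = torch.optim.Adam(student_G.parameters(), lr=2e-4)

def train_step(x, y):
    """One online step: the teacher sees GAN and ground-truth losses,
    while the student only imitates the teacher (no discriminator, no ground truth)."""
    # 1) Discriminator update on the teacher's output.
    fake_t = teacher_G(x)
    loss_D = F.softplus(-disc(y)).mean() + F.softplus(disc(fake_t.detach())).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Teacher update (adversarial + reconstruction against ground truth y).
    fake_t = teacher_G(x)
    loss_T = F.softplus(-disc(fake_t)).mean() + F.l1_loss(fake_t, y)
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # 3) Student update in the same iteration, distilling from the current teacher.
    with torch.no_grad():
        target = teacher_G(x)
    loss_S = F.l1_loss(student_G(x), target)
    opt_S.zero_grad(); loss_S.backward(); opt_S.step()
    return loss_D.item(), loss_T.item(), loss_S.item()

# Usage with random tensors standing in for a paired dataset.
x, y = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(train_step(x, y))
```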

Specifically, the student generator $G_{S}$ leverages only the complementary teacher generators $G^{W}_{T}$ and $G^{D}_{T}$ for optimization, so it can be trained in a discriminator-free and ground-truth-free setting. The framework transfers concepts at different levels of granularity, from both the intermediate layers and the output layer, to perform knowledge distillation. The whole optimization is conducted in an online distillation scheme; that is, $G^{W}_{T}$, $G^{D}_{T}$ and $G_{S}$ are optimized simultaneously and progressively.
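
A minimal sketch of such a multi-granularity objective is given below, assuming the student receives supervision from two teachers ($G^{W}_{T}$ and $G^{D}_{T}$) at both the output layer and selected intermediate layers, with 1x1 convolutional adapters matching the student's narrower feature maps to the teachers'. The layer choices, adapter design, and loss weights are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdapter(nn.Module):
    """1x1 conv that maps a student feature map to the teacher's channel width
    so the two can be compared with an elementwise loss."""
    def __init__(self, c_student, c_teacher):
        super().__init__()
        self.proj = nn.Conv2d(c_student, c_teacher, kernel_size=1)
    def forward(self, f):
        return self.proj(f)

def multi_granularity_loss(student_out, student_feats,
                           teacher_out, teacher_feats,
                           adapters, w_out=1.0, w_feat=1.0):
    """Output-level plus intermediate-level distillation against one teacher."""
    loss = w_out * F.l1_loss(student_out, teacher_out.detach())
    for f_s, f_t, adapt in zip(student_feats, teacher_feats, adapters):
        loss = loss + w_feat * F.mse_loss(adapt(f_s), f_t.detach())
    return loss

# Example shapes: the student's features are narrower than the teachers'.
s_out, s_feats = torch.randn(2, 3, 64, 64), [torch.randn(2, 16, 32, 32)]
tw_out, tw_feats = torch.randn(2, 3, 64, 64), [torch.randn(2, 64, 32, 32)]
td_out, td_feats = torch.randn(2, 3, 64, 64), [torch.randn(2, 64, 32, 32)]
adapters = nn.ModuleList([FeatureAdapter(16, 64)])

# The student's total objective sums the losses from both complementary teachers;
# no discriminator or ground-truth term appears.
total = (multi_granularity_loss(s_out, s_feats, tw_out, tw_feats, adapters) +
         multi_granularity_loss(s_out, s_feats, td_out, td_feats, adapters))
print(total.item())
```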

Source: Online Multi-Granularity Distillation for GAN Compression
