Teacher-Tutor-Student Knowledge Distillation is a method for image-based virtual try-on models. It treats the fake images produced by a parser-based method as "tutor knowledge", whose artifacts can be corrected by real "teacher knowledge" extracted from real person images in a self-supervised way. Beyond using real images as supervision, knowledge distillation is formulated in the try-on problem as distilling the appearance flows between the person image and the garment image, which finds dense correspondences between them and produces high-quality results.
Source: Parser-Free Virtual Try-on via Distilling Appearance Flows
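To make the two supervision signals concrete, the sketch below shows how a parser-free student flow network could be trained with flow-level distillation from a teacher's appearance flows plus image-level supervision from the real person image. This is a minimal, simplified sketch in PyTorch, not the authors' implementation: the module and variable names (`StudentFlowNet`, `warp_with_flow`, `teacher_flow`, `tutor_image`) are hypothetical, and the full method uses additional losses and network stages that are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of the teacher-tutor-student distillation objective.
# All names below are illustrative placeholders, not the paper's code.

class StudentFlowNet(nn.Module):
    """Toy parser-free student: predicts a dense 2-channel appearance flow
    (dx, dy offsets) from the garment image and the tutor's fake image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, garment, tutor_image):
        return self.net(torch.cat([garment, tutor_image], dim=1))


def warp_with_flow(garment, flow):
    """Warp the garment with the predicted appearance flow via grid_sample."""
    n, _, h, w = garment.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base_grid.to(garment.device) + flow.permute(0, 2, 3, 1)
    return F.grid_sample(garment, grid, align_corners=True)


def distillation_loss(student, teacher_flow, garment, tutor_image, real_image):
    """Two supervision signals: (a) distill the teacher's appearance flows,
    (b) reconstruct the real person image from the warped garment."""
    student_flow = student(garment, tutor_image)
    flow_term = F.l1_loss(student_flow, teacher_flow)
    recon_term = F.l1_loss(warp_with_flow(garment, student_flow), real_image)
    return flow_term + recon_term


if __name__ == "__main__":
    student = StudentFlowNet()
    garment = torch.rand(1, 3, 64, 64)
    tutor_image = torch.rand(1, 3, 64, 64)   # fake try-on result from the parser-based tutor
    real_image = torch.rand(1, 3, 64, 64)    # real person image ("teacher knowledge")
    teacher_flow = torch.zeros(1, 2, 64, 64) # appearance flow distilled from the teacher
    loss = distillation_loss(student, teacher_flow, garment, tutor_image, real_image)
    loss.backward()
    print(float(loss))
```

The `__main__` block uses random tensors purely to show the expected shapes; in practice the tutor image comes from the parser-based network and the real image is the ground-truth photo of the person wearing the garment.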
Task | Papers | Share |
---|---|---|
ECG based Sleep Staging | 1 | 6.67% |
EEG | 1 | 6.67% |
EEG based sleep staging | 1 | 6.67% |
Electroencephalogram (EEG) | 1 | 6.67% |
Sleep Stage Detection | 1 | 6.67% |
Sleep Staging | 1 | 6.67% |
W-R-L-D Sleep Staging | 1 | 6.67% |
W-R-N Sleep Staging | 1 | 6.67% |
Audio Tagging | 1 | 6.67% |