In the image inpainting task, the mechanism extracts complementary features from the word embeddings along two paths, comparing the descriptive text against the complementary image areas through reciprocal attention.
Source: Text-Guided Neural Image Inpainting
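A minimal PyTorch sketch of what such a two-path reciprocal (cross) attention could look like is given below. It is an illustration under assumptions, not the paper's implementation: the module names (`CrossAttention`, `ReciprocalAttention`), the feature dimensions, and the split into a masked-image path and a visible-image path are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttention(nn.Module):
    """Attend from image-region features to word embeddings."""

    def __init__(self, image_dim, text_dim, attn_dim):
        super().__init__()
        self.query_proj = nn.Linear(image_dim, attn_dim)
        self.key_proj = nn.Linear(text_dim, attn_dim)
        self.value_proj = nn.Linear(text_dim, attn_dim)

    def forward(self, image_feats, word_embeds):
        # image_feats: (B, N_regions, image_dim), word_embeds: (B, N_words, text_dim)
        q = self.query_proj(image_feats)                      # (B, N_regions, attn_dim)
        k = self.key_proj(word_embeds)                        # (B, N_words, attn_dim)
        v = self.value_proj(word_embeds)                      # (B, N_words, attn_dim)
        scores = torch.bmm(q, k.transpose(1, 2)) / q.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)                   # attention over words per region
        return torch.bmm(weights, v)                          # (B, N_regions, attn_dim)


class ReciprocalAttention(nn.Module):
    """Two cross-attention paths over complementary image areas (hypothetical split)."""

    def __init__(self, image_dim, text_dim, attn_dim):
        super().__init__()
        self.masked_path = CrossAttention(image_dim, text_dim, attn_dim)
        self.visible_path = CrossAttention(image_dim, text_dim, attn_dim)

    def forward(self, masked_feats, visible_feats, word_embeds):
        # Each path extracts the text information relevant to its own image regions,
        # yielding complementary text-conditioned features.
        text_for_masked = self.masked_path(masked_feats, word_embeds)
        text_for_visible = self.visible_path(visible_feats, word_embeds)
        return text_for_masked, text_for_visible


if __name__ == "__main__":
    model = ReciprocalAttention(image_dim=256, text_dim=300, attn_dim=128)
    masked = torch.randn(2, 64, 256)    # features from the corrupted regions (assumed shape)
    visible = torch.randn(2, 64, 256)   # features from the visible regions (assumed shape)
    words = torch.randn(2, 10, 300)     # word embeddings of the descriptive text
    out_m, out_v = model(masked, visible, words)
    print(out_m.shape, out_v.shape)     # torch.Size([2, 64, 128]) for each path
```

In this sketch each path attends from its own set of image regions to the same word embeddings, so the two outputs carry text information complementary to the masked and visible areas respectively.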
Task | Papers | Share |
---|---|---|
Scene Understanding | 2 | 7.69% |
Fairness | 2 | 7.69% |
Object Detection | 2 | 7.69% |
Semantic Segmentation | 2 | 7.69% |
Deep Learning | 1 | 3.85% |
Management | 1 | 3.85% |
Graph Attention | 1 | 3.85% |
Zero-Shot Learning | 1 | 3.85% |
Decoder | 1 | 3.85% |