Network Dissection is an interpretability method for CNNs that evaluates the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human-interpretable labels across a range of objects, parts, scenes, textures, materials, and colors.
The measurement of interpretability proceeds in three steps:

1. Identify a broad set of human-labeled visual concepts.
2. Gather the responses of the hidden units to the known concepts.
3. Quantify the alignment of each unit-concept pair, labeling the unit with its best-matching concept.
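As a concrete illustration, the sketch below scores a single unit against a set of concept segmentation masks. It assumes the unit's activation maps have already been collected and resized to the masks' resolution; the function and argument names are hypothetical, while the 0.5% activation quantile and the 0.04 IoU floor are the defaults reported in the paper.

```python
# A minimal sketch of the Network Dissection scoring step, assuming
# activations and concept masks share one spatial resolution.
# Names here are illustrative, not the authors' reference code.
import numpy as np

def dissect_unit(acts, concept_masks, top_quantile=0.005, iou_floor=0.04):
    """Label one unit by its best-matching visual concept.

    acts          -- float array (num_images, H, W): the unit's activation maps
    concept_masks -- dict concept -> bool array (num_images, H, W)
    Returns (label, iou), with label None if no concept clears the floor.
    """
    # Per-unit threshold T_k chosen so that P(a_k > T_k) = top_quantile
    # over all spatial locations of all images, then binarize the maps.
    t_k = np.quantile(acts, 1.0 - top_quantile)
    unit_mask = acts >= t_k

    # Dataset-wide IoU between the unit's binary mask and each concept's mask.
    best_label, best_iou = None, 0.0
    for concept, label_mask in concept_masks.items():
        inter = np.logical_and(unit_mask, label_mask).sum()
        union = np.logical_or(unit_mask, label_mask).sum()
        iou = inter / union if union > 0 else 0.0
        if iou > best_iou:
            best_label, best_iou = concept, iou

    # A unit counts as a detector only if its best IoU exceeds the floor.
    return (best_label, best_iou) if best_iou > iou_floor else (None, best_iou)
```

Because the threshold is a per-unit quantile computed over the whole dataset, the comparison stays fair across units whose activations live on different scales.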
| Task | Papers | Share |
|---|---|---|
| Adversarial Defense | 1 | 12.50% |
| Classification | 1 | 12.50% |
| General Classification | 1 | 12.50% |
| Image Classification | 1 | 12.50% |
| Adversarial Robustness | 1 | 12.50% |
| Image Generation | 1 | 12.50% |
| Image Retrieval | 1 | 12.50% |
| Object Detection | 1 | 12.50% |