Interpretability

Class activation maps (CAMs) can be used to interpret the prediction decision made by a convolutional neural network (CNN). A CAM highlights the class-specific discriminative image regions by projecting the weights of the output layer for a given class back onto the feature maps of the last convolutional layer, producing a heatmap over the input image.

Image source: Learning Deep Features for Discriminative Localization

Source: Is Object Localization for Free? - Weakly-Supervised Learning With Convolutional Neural Networks
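The weighted-sum computation behind a CAM can be sketched as follows. This is a minimal NumPy illustration, not a full implementation: it assumes you have already extracted the last-layer convolutional feature maps and the weights of the final fully connected (post-global-average-pooling) layer from a trained network; the function name and shapes are illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a CAM as the class-weighted sum of conv feature maps.

    feature_maps: array of shape (C, H, W) -- activations of the last
                  convolutional layer for one input image.
    fc_weights:   array of shape (num_classes, C) -- weights of the final
                  linear layer applied after global average pooling.
    class_idx:    index of the class whose activation map is wanted.
    """
    # Weighted sum over the channel dimension: (C,) . (C, H, W) -> (H, W)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    # Normalize to [0, 1] for visualization (e.g. overlaying on the image).
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the resulting (H, W) map is upsampled to the input image resolution and overlaid as a heatmap to show which regions drove the prediction.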
