Our method is composed of two networks: a localizer that yields a segmentation mask, followed by a classifier.
The CNN classifier is used to collect both positive and negative evidence at the pixel level to train the decoder.
Interpolation is required to restore full-size CAMs, yet it does not account for the statistical properties of objects, such as color and texture, leading to activations with inconsistent boundaries and inaccurate localizations.
Hence, our proposed student-teacher framework is trained to address the occlusion problem by matching the distributions of between- and within-class distances (DCDs) of occluded samples with those of holistic (non-occluded) samples, thereby using the latter as a soft-labeled reference to learn well-separated DCDs.
We propose novel regularization terms that enable the model to seek both non-discriminative and discriminative regions, while discouraging unbalanced segmentations.
CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
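The standard CAM construction weights a network's last convolutional feature maps by the classifier weights of the predicted class and normalizes the result into a heatmap. A minimal NumPy sketch, with illustrative function name and array shapes (not tied to any specific model in this work):

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Compute a class activation map (CAM).

    features: (C, H, W) array of last-layer conv feature maps.
    weights:  (num_classes, C) fully connected classifier weights.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Weighted sum of feature maps using the chosen class's weights.
    cam = np.tensordot(weights[class_idx], features, axes=1)  # -> (H, W)
    # Min-max normalize so the map is comparable across images.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice this low-resolution map is then upsampled (e.g. bilinearly) to the input size and thresholded to obtain a localization region.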
Weakly supervised object localization is a challenging task in which the object of interest must be localized while its appearance is simultaneously learned.
We propose a new constrained-optimization formulation for deep ordinal classification, in which uni-modality of the label distribution is enforced implicitly via a set of inequality constraints over all pairs of adjacent labels.
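The adjacent-pair inequalities can be made concrete with a simple hinge-penalty sketch: a distribution peaked at label y should be non-decreasing before y and non-increasing after it. This is only an illustration of the constraints themselves, not the constrained-optimization scheme proposed in the work; the function name is illustrative:

```python
import numpy as np

def unimodality_penalty(p, y):
    """Sum of violations of uni-modality around label y.

    For a probability vector p over ordinal labels, uni-modality peaked
    at y requires p[k] <= p[k+1] for k < y and p[k] >= p[k+1] for k >= y.
    Each violated adjacent-pair inequality contributes its magnitude.
    """
    p = np.asarray(p, dtype=float)
    penalty = 0.0
    for k in range(len(p) - 1):
        if k < y:
            # Distribution should rise toward the target label.
            penalty += max(0.0, p[k] - p[k + 1])
        else:
            # Distribution should fall after the target label.
            penalty += max(0.0, p[k + 1] - p[k])
    return penalty
```

A unimodal distribution peaked at the target yields zero penalty; any local bump elsewhere contributes its violation magnitude.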
Ranked #1 on Historical Color Image Dating on the HCI dataset.
Key challenges are identified for the application of deep WSOL methods in histology: under/over-activation of CAMs, sensitivity to thresholding, and model selection.
Pointwise localization allows more precise localization and more accurate interpretability than bounding boxes in applications where objects are highly unstructured, such as in the medical domain.
In this thesis, we tackle the neural network overfitting issue from a representation learning perspective by considering the situation where few training samples are available, as is the case in many real-world applications.
In this work, we tackle the issue of training neural networks for classification tasks when few training samples are available.
The motivation of this work is to learn the dependencies that may lie in the output data in order to improve prediction accuracy.