Contextual Interference Reduction by Selective Fine-Tuning of Neural Networks

21 Nov 2020 · Mahdi Biparva, John Tsotsos

Feature disentanglement of foreground target objects from the surrounding background context has not yet been fully accomplished. The lack of network interpretability hinders progress toward feature disentanglement and more robust generalization. In this work, we study how the surrounding context interferes with a disentangled representation of the foreground target object. We hypothesize that the representation of the surrounding context is heavily tied to that of the foreground object, owing to the dense hierarchical parametrization of convolutional networks trained with under-constrained learning algorithms. Working within a framework that combines bottom-up and top-down processing, we investigate a systematic approach to shift the learned representations of feedforward networks from an emphasis on the irrelevant context to the foreground objects. The top-down pass provides importance maps, a form of internal self-interpretation for the network, that guide the learning algorithm to focus on the relevant foreground regions and thereby achieve a more robust representation. We define an experimental evaluation setup on the MNIST dataset in which the role of context is emphasized. The experimental results show not only improved label prediction accuracy but also a higher degree of robustness to background perturbations generated by various noise methods.
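As a rough illustration of the idea (not the authors' implementation), the sketch below fine-tunes a small convolutional classifier on MNIST digits composited onto noisy backgrounds, and uses an input-gradient importance map as a crude stand-in for the paper's top-down importance maps: a penalty term discourages importance mass on background pixels, pushing the representation toward the foreground object. All names here (`SmallCNN`, `add_noise_background`, `selective_loss`, `lam`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_noise_background(digits, noise_std=0.5):
    # Emphasize the role of context: composite MNIST digits (foreground)
    # onto a random-noise background. The foreground mask is taken from the
    # digit's nonzero pixels. (Hypothetical setup, not the paper's exact one.)
    fg_mask = (digits > 0.1).float()
    noise = noise_std * torch.rand_like(digits)
    images = fg_mask * digits + (1.0 - fg_mask) * noise
    return images, fg_mask

class SmallCNN(nn.Module):
    # A minimal classifier standing in for the paper's feedforward network.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def importance_map(model, images, labels):
    # Gradient of the true-class logit w.r.t. the input: a crude proxy for
    # the top-down importance maps described in the abstract.
    images = images.clone().requires_grad_(True)
    score = model(images).gather(1, labels[:, None]).sum()
    grad, = torch.autograd.grad(score, images, create_graph=True)
    return grad.abs()

def selective_loss(model, images, labels, fg_mask, lam=0.1):
    # Cross-entropy plus a penalty on importance assigned to background
    # pixels (fg_mask is 1 on the digit, 0 on the surrounding context).
    ce = F.cross_entropy(model(images), labels)
    imp = importance_map(model, images, labels)
    bg_penalty = (imp * (1.0 - fg_mask)).mean()
    return ce + lam * bg_penalty
```

Under this reading, robustness would be evaluated by regenerating the backgrounds with different noise methods at test time and measuring how much accuracy degrades relative to the baseline fine-tuning.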
