A Novel Technique for Evidence based Conditional Inference in Deep Neural Networks via Latent Feature Perturbation

24 Nov 2018 · Dinesh Khandelwal, Suyash Agrawal, Parag Singla, Chetan Arora

Auxiliary information can be exploited in machine learning models using the paradigm of evidence-based conditional inference. Multi-modal techniques in Deep Neural Networks (DNNs) can be seen as perturbing the latent feature representation to incorporate evidence from the auxiliary modality. However, they require training a specialized network that can map sparse evidence to a high-dimensional latent space vector. Designing such a network, as well as collecting jointly labeled data to train it, is a non-trivial task. In this paper, we present a novel multi-task learning (MTL) based framework for evidence-based conditional inference in DNNs that overcomes both these shortcomings. Our framework incorporates evidence as the output of secondary task(s), while modeling the original problem as the primary task of interest. During inference, we employ a novel Bayesian formulation to update the joint latent feature representation so as to maximize the probability of the observed evidence. Since our approach models evidence as a prediction from a DNN, this can often be achieved using standard pre-trained backbones for popular tasks, eliminating the need for training altogether. Even when training is required, our MTL architecture ensures it can be done without any jointly labeled data. Exploiting evidence using our framework, we show an improvement of 3.9% over the state of the art for predicting semantic segmentation given image tags, and 2.8% for predicting instance segmentation given image captions.
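The sketch below illustrates the core idea of inference-time latent perturbation as the abstract describes it: a shared encoder produces latent features, a secondary head predicts the evidence, and at test time the latents are adjusted by gradient steps to maximize the likelihood of the observed evidence before decoding the primary prediction. All module names, the choice of a multi-label BCE likelihood for tag evidence, and the omission of the explicit prior term from the paper's Bayesian formulation are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MTLNet(nn.Module):
    """Hypothetical multi-task model: a shared encoder yields latent
    features z; a primary head solves the task of interest (e.g.
    segmentation) and a secondary head predicts the evidence
    (e.g. image tags)."""
    def __init__(self, encoder, primary_head, evidence_head):
        super().__init__()
        self.encoder = encoder
        self.primary_head = primary_head
        self.evidence_head = evidence_head

def infer_with_evidence(model, x, evidence, steps=20, lr=0.1):
    """Evidence-based conditional inference (sketch): perturb the latent
    features so the secondary head agrees with the observed evidence,
    then decode the primary prediction from the perturbed latents."""
    with torch.no_grad():
        z = model.encoder(x)                 # initial latent features
    z = z.clone().requires_grad_(True)       # make z a free variable
    opt = torch.optim.SGD([z], lr=lr)
    # Assumed likelihood: evidence given as a multi-label tag vector,
    # so minimizing BCE maximizes log p(evidence | z).
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = bce(model.evidence_head(z), evidence)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.primary_head(z)          # conditioned prediction
```

A fuller treatment following the paper's Bayesian view would also add a prior term penalizing how far z drifts from the encoder's output; the gradient loop here keeps only the evidence-likelihood term for brevity.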
