Self-Supervised Learning

Visual Commonsense Region-based Convolutional Neural Network

Introduced by Wang et al. in Visual Commonsense R-CNN

VC R-CNN is an unsupervised feature representation learning method that uses a Region-based Convolutional Neural Network (R-CNN) as the visual backbone and causal intervention as the training objective. Given a set of detected object regions in an image (e.g., from Faster R-CNN), the proxy training objective of VC R-CNN, like that of other unsupervised feature learning methods (e.g., word2vec), is to predict the contextual objects of a region. The fundamental difference is that VC R-CNN makes this prediction with the causal intervention P(Y|do(X)), whereas the others use the conventional likelihood P(Y|X). This is the core reason why VC R-CNN can learn "sense-making" knowledge, such as a chair can be sat on, rather than merely "common" co-occurrences, such as a chair being likely to be present when a table is observed.

Source: Visual Commonsense R-CNN
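
One common way to approximate P(Y|do(X)) is the backdoor adjustment, P(Y|do(X)) = Σ_z P(Y|X,z) P(z), which marginalizes over a confounder dictionary Z (e.g., average features of each object class) weighted by its prior, instead of conditioning on whatever context happens to co-occur. The PyTorch sketch below illustrates such an interventional prediction head; the class name InterventionalPredictor, the uniform prior over confounders, and the simple feature-fusion layer are illustrative assumptions rather than the exact VC R-CNN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterventionalPredictor(nn.Module):
    """Predict a contextual region's class under P(Y | do(X)) via backdoor adjustment.

    Illustrative sketch only: the confounder dictionary, prior, and fusion layer
    are assumptions, not the exact VC R-CNN implementation.
    """

    def __init__(self, feat_dim, num_classes, confounder_dict):
        super().__init__()
        # confounder_dict: (Nz, feat_dim) tensor, e.g. the mean RoI feature of
        # each object category. A uniform prior P(z) is assumed for simplicity.
        self.register_buffer("z_dict", confounder_dict)  # (Nz, D)
        self.register_buffer(
            "p_z",
            torch.full((confounder_dict.size(0),), 1.0 / confounder_dict.size(0)),
        )
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # x: (B, D) feature of region X.
        B, D = x.shape
        Nz = self.z_dict.size(0)
        # Pair X with every confounder z and score P(Y | X, z).
        xz = torch.cat(
            [x.unsqueeze(1).expand(B, Nz, D), self.z_dict.unsqueeze(0).expand(B, Nz, D)],
            dim=-1,
        )
        probs_per_z = F.softmax(self.cls(torch.relu(self.fuse(xz))), dim=-1)  # (B, Nz, C)
        # Backdoor adjustment: marginalize the confounder with its prior P(z),
        # instead of conditioning on the observed co-occurring context.
        return (probs_per_z * self.p_z.view(1, Nz, 1)).sum(dim=1)  # (B, C)
```

In practice, confounder_dict could hold, for example, the mean RoI feature of each of the 80 COCO categories, and the prior P(z) could be estimated from class frequencies rather than assumed uniform.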


Tasks


Task Papers Share
Image Captioning 1 50.00%
Visual Question Answering (VQA) 1 50.00%

