Semantic-Aware Scene Recognition

Scene recognition is currently one of the most challenging research fields in computer vision, largely due to ambiguity between classes: images of several scene classes may share similar objects, causing confusion among them. The problem is aggravated when images within a particular scene class are notably different. Convolutional Neural Networks (CNNs) have significantly boosted performance in scene recognition, although it still lags behind that of other recognition tasks (e.g., object or image recognition). In this paper, we describe a novel approach for scene recognition based on an end-to-end multi-modal CNN that combines image and context information by means of an attention module. Context information, in the form of semantic segmentation, is used to gate features extracted from the RGB image by leveraging information encoded in the semantic representation: the set of scene objects and stuff, and their relative locations. This gating process reinforces the learning of indicative scene content and enhances scene disambiguation by refocusing the receptive fields of the CNN on it. Experimental results on four publicly available datasets show that the proposed approach outperforms every other state-of-the-art method while significantly reducing the number of network parameters. All the code and data used in this paper are available at
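The semantic gating described above can be sketched as follows. This is a hedged illustration under assumptions, not the authors' implementation: the tensor shapes, the learned projection `w`, and the choice of a sigmoid gate are all assumptions made only to show the idea of semantic features modulating RGB features channel- and location-wise.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_gate(rgb_feat, sem_feat, w):
    """Gate RGB features with an attention map derived from semantic features.

    rgb_feat: (C, H, W) features from the RGB branch (hypothetical shape)
    sem_feat: (C, H, W) features from the semantic-segmentation branch
    w:        (C, C) learned projection mapping semantic features to gate logits
    """
    # Project semantic features channel-wise into per-location gate logits.
    logits = np.einsum('oc,chw->ohw', w, sem_feat)
    gate = sigmoid(logits)        # attention values in (0, 1)
    # Element-wise gating: suppress features at non-indicative locations,
    # pass through features where the semantic content supports the scene class.
    return rgb_feat * gate

# Toy usage with random tensors
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, 8, 8))
sem = rng.standard_normal((4, 8, 8))
w = rng.standard_normal((4, 4))
out = semantic_gate(rgb, sem, w)
print(out.shape)  # (4, 8, 8)
```

Because the gate lies in (0, 1), the module can only attenuate RGB activations, never amplify them, which matches the intuition of refocusing the network on indicative content rather than adding new signal.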




Used in the Paper:

ADE20K, SUN397, MIT Indoor Scenes

Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Scene Recognition | ADE20K | Semantic-Aware Scene Recognition (ResNet-18) | Top 1 Accuracy | 62.55 | # 1 |
| Scene Recognition | MIT Indoor Scenes | Semantic-Aware Scene Recognition (ResNet-50) | Accuracy | 87.10 | # 2 |
| Scene Recognition | Places365 | Semantic-Aware Scene Recognition (ResNet-18) | Top 1 Accuracy | 56.51 | # 2 |
| | | | Top 5 Accuracy | 86.00 | # 2 |
| Scene Recognition | SUN397 | Semantic-Aware Scene Recognition (ResNet-50) | Accuracy | 74.04 | # 2 |

