In this paper, we propose a new deep hashing method for multi-label image retrieval that re-defines the pairwise similarity as an instance similarity, quantified as a percentage based on the normalized semantic labels.
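As a minimal sketch of the idea above, the graded instance similarity between two images can be computed from their normalized multi-hot label vectors. The abstract does not specify the exact formula, so the cosine similarity used below is an assumption for illustration:

```python
import numpy as np

def instance_similarity(labels_a, labels_b):
    """Soft similarity in [0, 1] between two multi-hot label vectors.

    Hypothetical formulation: L2-normalize each label vector and take
    their inner product (cosine similarity), yielding a percentage-like
    score rather than a hard 0/1 pairwise similarity.
    """
    a = np.asarray(labels_a, dtype=float)
    b = np.asarray(labels_b, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(a @ b)

# Two images sharing one of their two labels score 0.5;
# identical label sets score 1.0.
partial = instance_similarity([1, 1, 0], [0, 1, 1])
full = instance_similarity([1, 0, 1], [1, 0, 1])
```

Such a graded score lets the hashing loss preserve how similar two multi-label images are, instead of collapsing every pair to "similar" or "dissimilar".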
In particular, we are interested in two application scenarios: i) cross-modal retrieval between panchromatic (PAN) and multi-spectral imagery, and ii) multi-label image retrieval between very high resolution (VHR) images and speech-based label annotations.
Index Terms—Cross-modal retrieval, information retrieval, multi-label image retrieval