Free-hand sketch-based image retrieval (SBIR) is a specific cross-view retrieval task, in which queries are abstract, ambiguous sketches while the retrieval database is composed of natural images.
We propose SketchParse, the first deep-network architecture for fully automatic parsing of freehand object sketches.
Existing works either require aligned sketch-image pairs or rely on an inefficient memory fusion layer to map visual information into a semantic space.
Historical watermark recognition is a highly practical, yet unsolved challenge for archivists and historians.
The generative model learns a mapping such that the distribution of translated sketches becomes indistinguishable from the distribution of natural images, enforced by an adversarial loss; it simultaneously learns an inverse mapping trained with a cycle-consistency loss, which further strengthens this indistinguishability.
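The two objectives described above can be sketched numerically. In this minimal illustration, `adversarial_loss` is the generator side of the standard non-saturating GAN objective and `cycle_consistency_loss` is the usual L1 round-trip penalty; the identity "generators" used in the toy example are hypothetical stand-ins, not the paper's actual networks.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Generator side of the non-saturating GAN objective:
    # push the discriminator's score on translated sketches toward "real" (1).
    return float(np.mean(-np.log(d_fake + 1e-8)))

def cycle_consistency_loss(x, x_reconstructed):
    # L1 distance between a sketch x and its round-trip reconstruction
    # F(G(x)); small values mean the two mappings approximately
    # invert each other.
    return float(np.mean(np.abs(x - x_reconstructed)))

# Toy example with identity "generators" G(x) = x and F(y) = y,
# so the round trip reconstructs the input exactly.
x = np.array([[0.2, 0.8], [0.5, 0.1]])   # a tiny "sketch"
y = x                                    # G(x): translated to image domain
x_rec = y                                # F(G(x)): mapped back

cyc = cycle_consistency_loss(x, x_rec)   # 0.0 for a perfect round trip
adv = adversarial_loss(np.array([0.9]))  # low loss: discriminator is fooled
```

In a full model both losses are summed (with a weighting coefficient on the cycle term) and backpropagated through the two generators jointly.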
In this paper, we propose a new benchmark for zero-shot SBIR in which the model is evaluated on novel classes that are not seen during training.
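The defining property of such a zero-shot protocol is that the train/test split is made over classes rather than over images. The helper below is a hypothetical illustration of that split; the function name, seed, and 25% test fraction are assumptions for the example, not details from the benchmark itself.

```python
import random

def zero_shot_split(class_names, test_fraction=0.25, seed=0):
    # Partition *classes* (not individual images) into disjoint
    # train/test sets, so every test class is unseen during training.
    rng = random.Random(seed)
    classes = sorted(class_names)
    rng.shuffle(classes)
    n_test = max(1, int(len(classes) * test_fraction))
    test_classes = set(classes[:n_test])
    train_classes = set(classes[n_test:])
    return train_classes, test_classes

classes = ["airplane", "cat", "chair", "dog",
           "guitar", "horse", "shoe", "table"]
train_cls, test_cls = zero_shot_split(classes)
```

At evaluation time, both the query sketches and the retrieval gallery are drawn only from `test_cls`, so the model cannot succeed by memorizing class-specific appearance.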
Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, rather than the often semi-photorealistic sketches found in existing datasets.