This work investigates the problem of sketch-guided object localization (SGOL), where human sketches are used as queries to localize objects in natural images.
Low-resource Handwritten Text Recognition (HTR) is a challenging problem due to scarce annotated data and very limited linguistic resources (dictionaries and language models).
Scene text instances found in natural images carry explicit semantic information that can provide important cues to solve a wide array of computer vision problems.
Text contained in an image carries high-level semantics that can be exploited to achieve richer image understanding.
Highly abstract amateur human sketches are purposely sourced to maximize the domain gap, in contrast to the often semi-photorealistic sketches found in existing datasets.
Embedding data into vector spaces is a popular strategy in pattern recognition methods.
In this work we introduce a cross-modal image retrieval system that accepts both text and sketch as query modalities.
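One common way such cross-modal retrieval works is to map every query modality and every gallery image into a shared embedding space and rank the gallery by cosine similarity. The sketch below illustrates only that ranking step with toy embeddings; the encoders and dimensionality are hypothetical placeholders, not this work's actual models.

```python
import numpy as np

def cosine_rank(query_emb: np.ndarray, gallery_embs: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by descending cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q  # cosine similarity of each gallery item to the query
    return np.argsort(-sims)

# Toy 3-D embeddings for three gallery images (illustrative values only).
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.7, 0.7, 0.0]])

# A query embedding (from either a text or a sketch encoder) near image 0.
query = np.array([0.9, 0.1, 0.0])
print(cosine_rank(query, gallery).tolist())  # → [0, 2, 1]
```

Because both modalities land in the same space, the same ranking function serves text and sketch queries alike.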
Offline signature verification is one of the most challenging tasks in biometrics and document forensics.
Digital libraries store images that can be highly degraded, and to index this kind of image we resort to word spotting as our information retrieval system.
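Word spotting typically ranks word images by the distance between compact word descriptors. A minimal sketch, assuming a level-1 PHOC-style binary character-occurrence descriptor (a common choice in the word-spotting literature, not necessarily this work's exact method):

```python
import string

ALPHABET = string.ascii_lowercase

def char_occurrence(word: str) -> list[int]:
    """Binary vector marking which alphabet characters appear in the word."""
    return [1 if c in word.lower() else 0 for c in ALPHABET]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of positions where two binary descriptors disagree."""
    return sum(x != y for x, y in zip(a, b))

def spot(query: str, candidates: list[str]) -> list[str]:
    """Rank candidate transcriptions by ascending distance to the query."""
    q = char_occurrence(query)
    return sorted(candidates, key=lambda w: hamming(q, char_occurrence(w)))

print(spot("library", ["liberty", "degrade", "librarian"]))
# → ['librarian', 'liberty', 'degrade']
```

In a real system the descriptors would be predicted from word images by a trained model, so degraded images can still be matched against a text query.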