Learning the semantic structure of objects from Web supervision

5 Jul 2016 · David Novotny, Diane Larlus, Andrea Vedaldi

While recent research in image understanding has often focused on recognizing more types of objects, understanding more about the objects is just as important. Recognizing object parts and attributes has been extensively studied, yet learning a large space of such concepts remains elusive due to the high cost of providing the detailed object annotations needed for supervision. The key contribution of this paper is an algorithm that learns the nameable parts of objects automatically, from images obtained by querying Web search engines. The key challenge is the high level of noise in the annotations; to address it, we propose a new unified embedding space in which the appearance and geometry of objects and their semantic parts are represented uniformly. Geometric relationships are induced in a soft manner by a rich set of non-semantic mid-level anchors, bridging the gap between semantic and non-semantic parts. We also show that the resulting embedding provides a visually intuitive mechanism to navigate the learned concepts and their corresponding images.
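
To make the central idea concrete, here is a minimal sketch of an embedding that represents appearance and geometry uniformly, with geometric relationships induced softly by mid-level anchors. The abstract gives no implementation details, so every function, variable, and parameter below (the Gaussian soft assignment, the `alpha` mixing weight, the descriptor dimensions) is a hypothetical illustration of the concept, not the authors' method:

```python
# Hypothetical sketch: map a candidate part region into one joint
# embedding that combines an appearance descriptor with a soft
# geometric code computed against non-semantic mid-level anchors.
import numpy as np

def soft_geometric_code(part_center, anchor_centers, sigma=0.1):
    """Softly assign a part location to a set of anchor locations.

    A Gaussian kernel over distances makes the relationship "soft"
    rather than a hard nearest-anchor assignment. Locations are
    assumed normalized to [0, 1]^2.
    """
    d2 = np.sum((anchor_centers - part_center) ** 2, axis=1)
    code = np.exp(-d2 / (2.0 * sigma ** 2))
    return code / (code.sum() + 1e-8)  # normalize to a distribution

def unified_embedding(appearance, part_center, anchor_centers, alpha=0.5):
    """Concatenate appearance and geometry into one embedding space.

    `appearance` is any L2-normalized descriptor (e.g. a CNN feature);
    `alpha` balances the two modalities so distances in the joint space
    reflect both what a part looks like and where it sits relative to
    the anchors.
    """
    geom = soft_geometric_code(part_center, anchor_centers)
    geom = geom / (np.linalg.norm(geom) + 1e-8)
    return np.concatenate([alpha * appearance, (1.0 - alpha) * geom])

# Toy usage: 5 mid-level anchors, an 8-D appearance feature.
rng = np.random.default_rng(0)
anchors = rng.random((5, 2))          # anchor centers in the image plane
feat = rng.standard_normal(8)
feat /= np.linalg.norm(feat)          # L2-normalized appearance descriptor
emb = unified_embedding(feat, np.array([0.4, 0.6]), anchors)
print(emb.shape)                      # (13,) = 8 appearance + 5 geometry
```

In a joint space of this kind, nearest-neighbor search can retrieve regions that agree in both appearance and anchor-relative position, which is what lets non-semantic anchors bridge toward the semantic parts the paper aims to learn from noisy Web images.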
