Learning a Dynamic Map of Visual Appearance

CVPR 2020  ·  Tawfiq Salem, Scott Workman, Nathan Jacobs

The appearance of the world varies dramatically not only from place to place but also from hour to hour and month to month. Every day, billions of images capturing this complex relationship are taken, many of them tagged with precise time and location metadata. We propose to use these images to construct a global-scale, dynamic map of visual appearance attributes. Such a map enables fine-grained understanding of the expected appearance at any geographic location and time. Our approach integrates dense overhead imagery with location and time metadata into a general framework capable of mapping a wide variety of visual attributes. A key feature of our approach is that it requires no manual data annotation. We demonstrate how this approach can support various applications, including image-driven mapping, image geolocalization, and metadata verification.
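To make the framework described above concrete, below is a minimal PyTorch sketch of one plausible realization: an overhead-image encoder fused with small location and time embeddings to predict a distribution over visual appearance attributes, trained against pseudo-labels from a pretrained ground-level attribute classifier rather than manual annotation. All names, layer sizes, the ResNet-18 backbone, the 40-attribute output, and the KL-divergence loss are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a dynamic-appearance model: overhead imagery + (lat, lon) +
# (month, hour) -> visual attribute distribution. Assumptions throughout.
import torch
import torch.nn as nn
import torchvision.models as models


class DynamicAppearanceMap(nn.Module):
    def __init__(self, num_attributes=40):
        super().__init__()
        # Overhead-image branch: a standard CNN backbone with its
        # classification head removed (assumption: ResNet-18, 512-d output).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.overhead_encoder = backbone

        # Context branches: embed normalized (lat, lon) and (month, hour)
        # with small MLPs (sizes are arbitrary choices).
        self.loc_mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))
        self.time_mlp = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))

        # Fusion head: concatenate all features, predict attribute logits.
        self.head = nn.Sequential(
            nn.Linear(512 + 64 + 64, 256),
            nn.ReLU(),
            nn.Linear(256, num_attributes),
        )

    def forward(self, overhead, loc, time):
        feats = torch.cat(
            [self.overhead_encoder(overhead), self.loc_mlp(loc), self.time_mlp(time)],
            dim=1,
        )
        return self.head(feats)  # attribute logits


# No manual annotation: the training signal is a distribution of attribute
# scores produced by a pretrained classifier on co-located, co-temporal
# ground-level images (stand-in random targets here).
model = DynamicAppearanceMap(num_attributes=40)
overhead = torch.randn(8, 3, 224, 224)                    # batch of overhead images
loc = torch.rand(8, 2)                                    # normalized (lat, lon)
time = torch.rand(8, 2)                                   # normalized (month, hour)
pseudo_labels = torch.softmax(torch.randn(8, 40), dim=1)  # stand-in targets

log_probs = torch.log_softmax(model(overhead, loc, time), dim=1)
loss = nn.functional.kl_div(log_probs, pseudo_labels, reduction="batchmean")
loss.backward()
```

Once trained, such a model can be evaluated densely over an overhead-image grid at any chosen time to render an attribute map, or compared against a query image's predicted attributes for geolocalization and metadata-verification style applications.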

No code implementations yet.

Datasets


Introduced in the paper: Cross-View Time Dataset
Used in the paper: Places

