DeepMove: Learning Place Representations through Large Scale Movement Data

11 Jul 2018 · Yang Zhou, Yan Huang

Understanding and reasoning about places and their relationships are critical for many applications. Places are traditionally curated by a small group of people as place gazetteers and are represented by an ID with a spatial extent, a category, and other descriptions. However, a place's context is described to a large extent by the movements made to and from other places: places are linked and related to each other by these movements, and this important context is missing from the traditional representation. We present DeepMove, a novel approach for learning latent representations of places. DeepMove advances current deep-learning-based place representations by directly modeling movements between places. We evaluate DeepMove's latent representations on place categorization and clustering tasks over large place and movement datasets, studying the effect of important parameters. Our results show that DeepMove outperforms state-of-the-art baselines: its representations achieve up to 15% higher place-category matching rates and up to 39% higher silhouette coefficient values for place clusters than competing methods. DeepMove is spatial- and temporal-context aware and scalable; it outperforms competing models while using a much smaller training dataset (one month, or 1/12, of the data). These qualities make it suitable for a broad class of real-world applications.
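No implementation is linked for this paper, but the core idea of the abstract — learning place embeddings directly from movements between places, analogous to learning word embeddings from word co-occurrences — can be illustrated with a minimal skip-gram-style sketch. Everything below (the toy movement records, the `PlaceEmbedding` module, and the training loop) is a hypothetical illustration of that general technique, not the authors' actual DeepMove model.

```python
import torch
import torch.nn as nn

# Hypothetical toy movement records: (origin_place_id, destination_place_id).
# In the paper's setting these would come from large-scale movement data.
movements = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0), (1, 3)]
num_places, dim = 4, 16

class PlaceEmbedding(nn.Module):
    """Skip-gram-style model: predict the destination place from the origin."""
    def __init__(self, num_places, dim):
        super().__init__()
        self.origin = nn.Embedding(num_places, dim)  # learned place vectors
        self.dest = nn.Embedding(num_places, dim)    # destination ("context") vectors

    def forward(self, o, d):
        # Score every candidate destination against the origin embedding,
        # then train with cross-entropy against the observed destination.
        scores = self.origin(o) @ self.dest.weight.T
        return nn.functional.cross_entropy(scores, d)

model = PlaceEmbedding(num_places, dim)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
o = torch.tensor([m[0] for m in movements])
d = torch.tensor([m[1] for m in movements])
for _ in range(200):
    opt.zero_grad()
    loss = model(o, d)
    loss.backward()
    opt.step()

# model.origin.weight now holds latent place representations that could be
# fed to downstream tasks such as place categorization or clustering.
```

In this sketch, places that send movements to similar sets of destinations end up with similar embeddings, which is the intuition behind using movements as context for place representations.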
