Google Street View and deep learning: a new ground truthing approach for crop mapping

3 Dec 2019  ·  Yulin Yan, Youngryel Ryu

Ground referencing is essential for supervised crop mapping. However, conventional ground truthing involves extensive field surveys and post-processing, which is costly in time and labor. In this study, we applied a convolutional neural network (CNN) model to explore the efficacy of automatic ground truthing via Google Street View (GSV) images in two distinct farming regions: central Illinois and southern California. We further demonstrated the feasibility and reliability of the new ground referencing technique by performing pixel-based crop mapping with vegetation indices as the model input. The results were evaluated against the United States Department of Agriculture (USDA) Cropland Data Layer (CDL) products. From 8,514 GSV images, the CNN model identified 2,645 containing target crops, which were then classified into crop types including alfalfa, almond, corn, cotton, grape, pistachio, and soybean. The overall GSV image classification accuracy reached 93% in California and 97% in Illinois. We then shifted the image geographic coordinates by fixed empirical coefficients to produce 8,173 crop reference points: 1,764 in Illinois and 6,409 in California. Evaluation of these new reference points against CDL products showed satisfactory agreement of 94 to 97%. CNN-based mapping also captured the general pattern of crop type distributions, and the overall differences between CDL products and our mapping results were 4% in California and 5% in Illinois. Deep learning combined with GSV imagery thus provides an efficient and cost-effective alternative for ground referencing and crop mapping.
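The abstract does not name a network architecture for the GSV screening and classification step, so the following is only a minimal sketch of how such a classifier might be set up, assuming a fine-tuned torchvision ResNet-18, a hypothetical gsv_images/train directory of labeled images, and a hypothetical "non_crop" reject class used for screening:

```python
# Sketch of a GSV image screening/classification step (assumptions, not
# the paper's method): a fine-tuned ResNet-18 with the seven target crops
# plus a hypothetical "non_crop" reject class for screening.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

CLASSES = ["alfalfa", "almond", "corn", "cotton",
           "grape", "pistachio", "soybean", "non_crop"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: gsv_images/train/<class_name>/<image>.jpg,
# with folder names matching CLASSES.
train_set = datasets.ImageFolder("gsv_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # new class head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Images assigned to the reject class would be discarded; the remainder become candidate reference images for the coordinate-shifting step.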
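The coordinate shift that turns each classified image into an in-field reference point can be illustrated with a small geodesic offset. The paper's fixed empirical coefficients are not given in the abstract, so the 15 m offset and the use of the camera heading below are placeholder assumptions:

```python
# Sketch of shifting a GSV camera location into the adjacent field.
# The 15 m offset is a placeholder, not the paper's empirical coefficient.
import math

EARTH_RADIUS_M = 6371000.0

def shift_point(lat, lon, heading_deg, offset_m=15.0):
    """Move (lat, lon) offset_m meters along heading_deg (clockwise
    from north), using a small-offset spherical approximation."""
    bearing = math.radians(heading_deg)
    dlat = (offset_m * math.cos(bearing)) / EARTH_RADIUS_M
    dlon = (offset_m * math.sin(bearing)) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

# Example: a roadside camera in central Illinois facing a field to the east.
print(shift_point(40.1164, -88.2434, heading_deg=90.0))
```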
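For the pixel-based mapping step, vegetation indices serve as the model input. The abstract does not specify which indices were used; as one common example, NDVI computed from red and near-infrared reflectance:

```python
# Sketch of one common vegetation index (NDVI) as pixel-wise model input.
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarded against divide-by-zero."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Hypothetical two-band reflectance tile; each pixel's index values
# would feed the pixel-based crop classifier.
red = np.random.rand(256, 256)
nir = np.random.rand(256, 256)
print(ndvi(nir, red).shape)  # (256, 256)
```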
