MSTCGAN: Multiscale time conditional generative adversarial network for long-term satellite image sequence prediction

Satellite image sequence prediction is a crucial and challenging task. Previous studies approach it with optical-flow methods or with existing deep learning models for spatiotemporal sequences; however, these suffer either from oversimplified model assumptions or from blurry predictions and accumulating sequential errors when long-term forecasts are required. In this article, we propose a novel multiscale time conditional generative adversarial network (MSTCGAN). To avoid sequential error accumulation, MSTCGAN adopts a parallel prediction framework that produces each future image directly from a one-hot time condition input. In addition, a multiscale generator equipped with multihead axial attention is designed to preserve fine-grained details and keep the predicted appearance consistent. Moreover, we develop a temporal discriminator that mitigates blurring and maintains motion consistency in the predictions. Extensive experiments on the FengYun-4A satellite dataset demonstrate the effectiveness and superiority of the proposed method over state-of-the-art approaches.
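
To make the two key architectural ideas concrete, below is a minimal PyTorch sketch of (a) multihead axial attention, which factorizes 2-D self-attention into a row pass and a column pass, and (b) a generator that takes a one-hot lead-time vector so every forecast horizon can be produced in parallel rather than by recursive rollout. This is only an illustration under assumed settings, not the paper's implementation; all names (`AxialAttention`, `TimeConditionalGenerator`) and parameters (`in_frames`, `horizons`, `dim`) are hypothetical.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Multihead self-attention applied along one spatial axis at a time.

    Attending over rows and then columns reduces the cost of full 2-D
    attention from O((HW)^2) to O(HW(H + W)) while still letting every
    pixel exchange information with the whole image.
    """
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Row pass: each of the B*H rows is a sequence of length W.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Column pass: each of the B*W columns is a sequence of length H.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)

class TimeConditionalGenerator(nn.Module):
    """Predicts the frame at a requested lead time in a single shot.

    The one-hot lead-time vector is broadcast to a spatial map and
    concatenated with the observed frames, so each horizon is generated
    independently and sequential errors cannot accumulate.
    """
    def __init__(self, in_frames=4, horizons=10, dim=64):
        super().__init__()
        self.horizons = horizons
        self.encode = nn.Conv2d(in_frames + horizons, dim, 3, padding=1)
        self.attn = AxialAttention(dim)
        self.decode = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, frames, t):                # frames: (B, T_in, H, W)
        b, _, h, w = frames.shape
        onehot = nn.functional.one_hot(t, self.horizons).float()
        cond = onehot.view(b, self.horizons, 1, 1).expand(-1, -1, h, w)
        z = self.encode(torch.cat([frames, cond], dim=1))
        return self.decode(self.attn(z))

# Usage: any horizon can be queried independently of the others.
gen = TimeConditionalGenerator(in_frames=4, horizons=10)
past = torch.randn(2, 4, 64, 64)                 # 4 observed frames
t = torch.tensor([0, 7])                          # requested lead times
pred = gen(past, t)                               # (2, 1, 64, 64)
```

Because the lead time is an explicit input, all horizons can be batched into a single forward pass, which is what lets a parallel framework of this kind sidestep the error accumulation of autoregressive rollouts.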
