Spatio-Temporal-Decoupled Masked Pre-training: Benchmarked on Traffic Forecasting

1 Dec 2023  ·  Haotian Gao, Renhe Jiang, Zheng Dong, Jinliang Deng, Yuxin Ma, Xuan Song

Accurate forecasting of multivariate traffic flow time series remains challenging due to substantial spatio-temporal heterogeneity and complex long-range correlative patterns. To address this, we propose Spatio-Temporal-Decoupled Masked Pre-training (STD-MAE), a novel framework that employs masked autoencoders to learn and encode complex spatio-temporal dependencies via pre-training. Specifically, we use two decoupled masked autoencoders to reconstruct the traffic data along the spatial and temporal axes in a self-supervised pre-training stage. These masked-reconstruction mechanisms capture long-range correlations in space and in time separately. The learned hidden representations are then used to augment the downstream spatio-temporal traffic predictor. A series of quantitative and qualitative evaluations on four widely used traffic benchmarks (PEMS03, PEMS04, PEMS07, and PEMS08) verifies the state-of-the-art performance, with STD-MAE explicitly enhancing the downstream spatio-temporal models' ability to capture long-range, intricate spatial and temporal patterns. Code is available at
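The decoupling described above — one autoencoder reconstructing from inputs with whole time steps hidden, the other from inputs with whole sensors hidden — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name `decoupled_mask`, the zero-fill for masked entries, and the 25% masking ratio are illustrative assumptions:

```python
import numpy as np

def decoupled_mask(x, ratio, axis, rng):
    """Hide whole slices of a (time, sensor) array along one axis.

    axis=0 hides entire time steps (temporal masking);
    axis=1 hides entire sensors (spatial masking).
    Returns the masked copy and the boolean mask of hidden indices.
    """
    n = x.shape[axis]
    hidden = rng.choice(n, size=int(n * ratio), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[hidden] = True
    x_masked = x.copy()
    if axis == 0:
        x_masked[mask, :] = 0.0   # zero-fill the hidden time steps
    else:
        x_masked[:, mask] = 0.0   # zero-fill the hidden sensors
    return x_masked, mask

rng = np.random.default_rng(0)
series = rng.normal(size=(12, 4))   # 12 time steps, 4 sensors

# Temporal branch: reconstruct the hidden time steps from the visible ones.
t_masked, t_mask = decoupled_mask(series, ratio=0.25, axis=0, rng=rng)
# Spatial branch: reconstruct the hidden sensors from the visible ones.
s_masked, s_mask = decoupled_mask(series, ratio=0.25, axis=1, rng=rng)
```

Each branch's autoencoder is then trained to reconstruct the original `series` from its masked copy, so the temporal branch must exploit long-range dependencies across time and the spatial branch across sensors.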


Results from the Paper

Ranked #1 on Traffic Prediction on PEMS-BAY (using extra training data)

Task                Dataset    Model    Metric          Value   Global Rank
Traffic Prediction  PeMS04     STD-MAE  12 Steps MAE    17.80   #1
Traffic Prediction  PeMS07     STD-MAE  MAE@1h          18.65   #1
Traffic Prediction  PEMS-BAY   STD-MAE  MAE @ 12 step    1.77   #1
                                        RMSE             4.20   #1
Traffic Prediction  PeMSD3     STD-MAE  12 steps MAE    13.80   #1
                                        12 steps RMSE   24.43   #4
                                        12 steps MAPE   13.96   #4
Traffic Prediction  PeMSD4     STD-MAE  12 steps MAE    17.80   #1
Traffic Prediction  PeMSD8     STD-MAE  12 steps MAE    13.44   #2
                                        12 steps RMSE   22.47   #3
                                        12 steps MAPE    8.76   #3

