GLinear
1 paper with code • 19 benchmarks • 7 datasets
GLinear is a deliberately simple model: it contains no complex components, functions, or blocks such as self-attention schemes or positional encoding. It integrates two components: (1) a non-linear GeLU-based transformation layer to capture intricate patterns, and (2) Reversible Instance Normalization (RevIN). This simple architecture has two practical benefits:

1. Training is much faster than for transformer-based predictors.
2. The model still delivers performance comparable to other state-of-the-art predictors.
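The forward pass implied by this description can be sketched as: RevIN-normalize each input series, apply a linear map over the time dimension followed by a GeLU non-linearity, project to the prediction horizon, then RevIN-denormalize. The following is a minimal NumPy sketch under that reading; the function name `glinear_forward`, the weight shapes, and the omission of RevIN's learnable affine parameters are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GeLU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def glinear_forward(x, W1, b1, W2, b2, eps=1e-5):
    """Hypothetical GLinear-style forward pass.

    x:  (batch, seq_len, channels) input window
    W1: (seq_len, hidden) time-mixing weights;  b1: (hidden,)
    W2: (hidden, pred_len) output projection;   b2: (pred_len,)
    Returns (batch, pred_len, channels) forecasts.
    """
    # RevIN normalize: per-instance, per-channel stats over the time axis
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True) + eps
    xn = (x - mu) / sigma

    # GeLU-based transformation layer applied along the time dimension
    h = gelu(np.einsum('blc,lh->bhc', xn, W1) + b1[None, :, None])

    # Linear projection to the prediction horizon
    y = np.einsum('bhc,ho->boc', h, W2) + b2[None, :, None]

    # RevIN denormalize: restore each instance's original scale and offset
    return y * sigma + mu

# Usage: forecast 24 steps from a 96-step window of 7 channels
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 96, 7))
W1 = 0.1 * rng.normal(size=(96, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, 24)); b2 = np.zeros(24)
forecast = glinear_forward(x, W1, b1, W2, b2)
print(forecast.shape)  # (2, 24, 7)
```

Because RevIN stores the per-instance statistics and reapplies them at the output, the network only has to model the normalized shape of each series, which is what lets such a small parameter count remain competitive.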
Most implemented papers
Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction
A performance comparison with state-of-the-art linear architectures (such as NLinear, DLinear, and RLinear) and a transformer-based time series predictor (Autoformer) shows that GLinear, despite being parametrically efficient, significantly outperforms the existing architectures in most cases of multivariate time series forecasting (TSF).