Search Results for author: Yangzhu Wang

Found 5 papers, 5 papers with code

Take an Irregular Route: Enhance the Decoder of Time-Series Forecasting Transformer

1 code implementation • 10 Dec 2023 • Li Shen, Yuning Wei, Yangzhu Wang, Hongguang Li

With the development of Internet of Things (IoT) systems, precise long-term forecasting methods are requisite for decision makers to evaluate current statuses and formulate future policies.

Tasks: Decoder, Time Series, +1 more

GBT: Two-stage transformer framework for non-stationary time series forecasting

1 code implementation • 17 Jul 2023 • Li Shen, Yuning Wei, Yangzhu Wang

It decouples the prediction process of TSFT into two stages, an Auto-Regression stage and a Self-Regression stage, to tackle the problem of different statistical properties between input and prediction sequences. Prediction results of the Auto-Regression stage serve as a Good Beginning, i.e., a better initialization for the inputs of the Self-Regression stage.
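The two-stage idea described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's GBT implementation: the naive trend extrapolation and the smoothing pass are placeholder stand-ins for the learned Auto-Regression and Self-Regression stages, and all function names are hypothetical.

```python
import numpy as np

def auto_regression_stage(history, horizon):
    """Stage 1 (placeholder): a coarse forecast of the next `horizon`
    steps via naive linear extrapolation of the last observed step."""
    step = history[-1] - history[-2]
    return history[-1] + step * np.arange(1, horizon + 1)

def self_regression_stage(init_forecast):
    """Stage 2 (placeholder): refine the initialization; here a simple
    moving-average smoothing stands in for the learned refinement."""
    return np.convolve(init_forecast, np.ones(3) / 3, mode="same")

history = np.sin(np.linspace(0, 6, 48))            # toy input sequence
good_beginning = auto_regression_stage(history, horizon=12)
final_forecast = self_regression_stage(good_beginning)
print(final_forecast.shape)  # (12,)
```

The key design point the snippet mirrors is that stage 2 never sees the raw input directly; it starts from stage 1's output, which already matches the statistical profile of the prediction window.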

Tasks: Decoder, Regression, +2 more

FDNet: Focal Decomposed Network for Efficient, Robust and Practical Time Series Forecasting

1 code implementation • 19 Jun 2023 • Li Shen, Yuning Wei, Yangzhu Wang, Huaxin Qiu

Moreover, we propose a focal input sequence decomposition method, which decomposes the input sequence in a focal manner for efficient and robust forecasting when facing the Long Sequence Time-series Input (LSTI) problem.
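One plausible reading of a "focal" decomposition is sketched below: segment lengths double as we move further into the past, so recent history keeps full resolution while distant history is grouped into coarser chunks. This is an assumption about the general idea, not FDNet's actual scheme; `focal_decompose` is a hypothetical helper.

```python
import numpy as np

def focal_decompose(x, n_levels=3):
    """Split `x` into segments whose length doubles with distance from
    the forecast point (an illustrative guess at focal decomposition,
    not the paper's exact method)."""
    segments = []
    end = len(x)
    seg_len = len(x) // (2 ** n_levels)
    for _ in range(n_levels):
        start = end - seg_len
        segments.append(x[start:end])   # most recent segment first
        end = start
        seg_len *= 2
    segments.append(x[:end])            # whatever oldest history remains
    return segments[::-1]               # reorder oldest-to-newest

x = np.arange(96)
parts = focal_decompose(x, n_levels=3)
print([len(p) for p in parts])  # [12, 48, 24, 12]
```

Each segment could then be modeled independently (and cheaply, for the coarse ones), which is one way such a scheme helps with very long input sequences.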

Tasks: Inductive Bias, Time Series, +1 more

Respecting Time Series Properties Makes Deep Time Series Forecasting Perfect

1 code implementation • 22 Jul 2022 • Li Shen, Yuning Wei, Yangzhu Wang

Thanks to the core idea of respecting time series properties, regardless of forecasting format, RTNet shows clearly superior forecasting performance compared with dozens of other SOTA time series forecasting baselines on three real-world benchmark datasets.

Tasks: Time Series, Time Series Forecasting

TCCT: Tightly-Coupled Convolutional Transformer on Time Series Forecasting

2 code implementations • 29 Aug 2021 • Li Shen, Yangzhu Wang

To address this issue, we propose the concept of the tightly-coupled convolutional Transformer (TCCT) and three TCCT architectures that apply transformed CNN architectures to the Transformer: (1) CSPAttention: by fusing CSPNet with the self-attention mechanism, the computation cost of self-attention is reduced by 30% and its memory usage by 50%, while achieving equivalent or better prediction accuracy.
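The CSPNet-style split can be illustrated as follows: self-attention runs on only half of the channels while the other half is passed through and concatenated back, which is where the compute and memory savings come from. This is an illustrative sketch of the general cross-stage-partial idea, not the paper's exact CSPAttention; the function names are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def csp_attention(x):
    """Split channels in half: scaled dot-product self-attention on the
    first half, identity pass-through on the second, then concatenate
    (a sketch of the CSPNet-with-attention fusion, not the paper's code)."""
    d = x.shape[-1] // 2
    x_att, x_skip = x[..., :d], x[..., d:]
    scores = x_att @ x_att.swapaxes(-1, -2) / np.sqrt(d)
    attended = softmax(scores) @ x_att
    return np.concatenate([attended, x_skip], axis=-1)

x = np.random.default_rng(0).normal(size=(8, 16))  # (seq_len, channels)
out = csp_attention(x)
print(out.shape)  # (8, 16)
```

Because attention cost scales with channel width, attending over half the channels roughly halves that term, consistent in spirit with the reported reductions.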

Tasks: Time Series, Time Series Forecasting
