Search Results for author: Haixu Wu

Found 17 papers, 12 papers with code

TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables

no code implementations • 29 Feb 2024 • Yuxuan Wang, Haixu Wu, Jiaxiang Dong, Yong Liu, Yunzhong Qiu, Haoran Zhang, Jianmin Wang, Mingsheng Long

Experimentally, TimeXer significantly improves time series forecasting with exogenous variables and achieves consistent state-of-the-art performance on twelve real-world forecasting benchmarks.

Time Series • Time Series Forecasting

TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling

no code implementations • 4 Feb 2024 • Jiaxiang Dong, Haixu Wu, Yuxuan Wang, Yunzhong Qiu, Li Zhang, Jianmin Wang, Mingsheng Long

To emphasize temporal correlation modeling, this paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.

Contrastive Learning • Data Augmentation +1

Transolver: A Fast Transformer Solver for PDEs on General Geometries

no code implementations • 4 Feb 2024 • Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, Mingsheng Long

Transformers have empowered many milestones across various fields and have recently been applied to solve partial differential equations (PDEs).

EuLagNet: Eulerian Fluid Prediction with Lagrangian Dynamics

no code implementations • 4 Feb 2024 • Qilong Ma, Haixu Wu, Lanxiang Xing, Jianmin Wang, Mingsheng Long

Accurately predicting the future state of a fluid is important in many areas, such as meteorology, oceanology, and aerodynamics.

Future prediction

HelmFluid: Learning Helmholtz Dynamics for Interpretable Fluid Prediction

no code implementations • 16 Oct 2023 • Lanxiang Xing, Haixu Wu, Yuezhou Ma, Jianmin Wang, Mingsheng Long

Compared with previous velocity-estimating methods, HelmFluid is faithfully derived from the Helmholtz theorem (recalled below) and unravels complex fluid dynamics with physically interpretable evidence.

Future prediction
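
For reference, the Helmholtz theorem invoked above decomposes a sufficiently smooth velocity field into a curl-free (potential) part and a divergence-free (solenoidal) part; how HelmFluid builds this decomposition into its architecture is described in the paper itself:

```latex
\mathbf{u} \;=\; \underbrace{\nabla \varphi}_{\text{curl-free}} \;+\; \underbrace{\nabla \times \mathbf{A}}_{\text{divergence-free}},
\qquad \nabla \times (\nabla \varphi) = \mathbf{0},
\qquad \nabla \cdot (\nabla \times \mathbf{A}) = 0 .
```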

iTransformer: Inverted Transformers Are Effective for Time Series Forecasting

4 code implementations • 10 Oct 2023 • Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long

These forecasters leverage Transformers to model the global dependencies over temporal tokens of time series, with each token formed by multiple variates of the same timestamp; iTransformer instead inverts this tokenization (a minimal sketch follows below).

Time Series • Time Series Forecasting
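
A minimal PyTorch-style sketch of the "inverted" tokenization contrasted with the per-timestamp tokens described above: each variate's whole lookback window becomes one token, and attention is then applied across variates. Module and dimension names here are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class InvertedEmbedding(nn.Module):
    """Embed each variate's entire lookback series as one token (inverted tokenization),
    instead of forming one token per timestamp from all variates."""

    def __init__(self, seq_len: int, d_model: int):
        super().__init__()
        self.proj = nn.Linear(seq_len, d_model)  # maps a length-seq_len series to a d_model token

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, seq_len, num_variates] -> tokens: [batch, num_variates, d_model]
        return self.proj(x.transpose(1, 2))

tokens = InvertedEmbedding(seq_len=96, d_model=128)(torch.randn(8, 96, 7))
print(tokens.shape)  # torch.Size([8, 7, 128]); attention then runs over the 7 variate tokens
```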

SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling

1 code implementation • NeurIPS 2023 • Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, Mingsheng Long

By relating masked modeling to manifold learning, SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series (a toy sketch of the aggregation step follows below).

Representation Learning • Time Series +1
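
A toy NumPy sketch of the aggregation step described above: each masked view of a series is reconstructed as a similarity-weighted sum of the other views. The similarity measure, masking scheme, and normalization are placeholder assumptions, not SimMTM's exact formulation.

```python
import numpy as np

def aggregate_neighbors(masked_views: np.ndarray, temperature: float = 0.1) -> np.ndarray:
    """Reconstruct each masked view as a softmax-weighted sum of the other views,
    with weights from cosine similarity (illustrative stand-in for learned similarities)."""
    normed = masked_views / (np.linalg.norm(masked_views, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T                       # pairwise cosine similarities between views
    np.fill_diagonal(sim, -np.inf)                # exclude each view from its own reconstruction
    weights = np.exp(sim / temperature)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ masked_views                 # weighted aggregation of the neighbor views

views = np.random.randn(4, 96)                    # e.g. four masked versions of a length-96 series
reconstruction = aggregate_neighbors(views)
```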

Solving High-Dimensional PDEs with Latent Spectral Models

1 code implementation • 30 Jan 2023 • Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long

A burgeoning paradigm is learning neural operators to approximate the input-output mappings of PDEs.

Vocal Bursts Intensity Prediction

TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis

3 code implementations • 5 Oct 2022 • Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long

TimesBlock adaptively discovers multi-periodicity and extracts complex temporal variations from the transformed 2D tensors with a parameter-efficient inception block (a minimal sketch of the period discovery and 1D-to-2D folding follows below).

Action Recognition • Anomaly Detection +4
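
A minimal sketch of the two ingredients referenced above: period discovery from FFT amplitudes and folding a 1D series into a 2D tensor. The padding, thresholds, and the 2D inception block itself are simplified assumptions rather than the released TimesNet code.

```python
import torch

def dominant_periods(x: torch.Tensor, k: int = 2) -> list:
    """Pick the top-k periods of a batch of series from FFT amplitudes.  x: [batch, seq_len, channels]."""
    amp = torch.fft.rfft(x, dim=1).abs().mean(dim=(0, 2))  # average amplitude per frequency
    amp[0] = 0                                             # ignore the zero-frequency (mean) component
    top_freqs = torch.topk(amp, k).indices
    return (x.shape[1] // top_freqs).tolist()              # period = seq_len / frequency index

def fold_to_2d(x: torch.Tensor, period: int) -> torch.Tensor:
    """Reshape [batch, seq_len, channels] into [batch, num_periods, period, channels],
    zero-padding the tail so the length divides evenly."""
    batch, length, channels = x.shape
    pad = (-length) % period
    x = torch.nn.functional.pad(x, (0, 0, 0, pad))
    return x.reshape(batch, (length + pad) // period, period, channels)

series = torch.randn(8, 96, 7)
grid = fold_to_2d(series, dominant_periods(series)[0])  # 2D variations, ready for a 2D conv block
```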

Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting

1 code implementation • 28 May 2022 • Yong Liu, Haixu Wu, Jianmin Wang, Mingsheng Long

However, their performance can degenerate terribly on non-stationary real-world data in which the joint distribution changes over time.

Time Series • Time Series Forecasting

Supported Policy Optimization for Offline Reinforcement Learning

3 code implementations • 13 Feb 2022 • Jialong Wu, Haixu Wu, Zihan Qiu, Jianmin Wang, Mingsheng Long

Policy constraint methods for offline reinforcement learning (RL) typically utilize parameterization or regularization that constrains the policy to perform actions within the support set of the behavior policy.

Offline RL • Reinforcement Learning +1

Flowformer: Linearizing Transformers with Conservation Flows

1 code implementation • 13 Feb 2022 • Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long

By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions without using specific inductive biases (a simplified sketch of this conservation structure follows below).

Ranked #4 on D4RL

D4RL • Offline RL +2
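
A simplified sketch of the idea described above: map queries and keys to non-negative features, track the "incoming flow" per sink (query) and the "outgoing flow" per source (key), rescale values by a competition over the outgoing flow, and normalize the aggregation by the incoming flow. The exact competition and allocation functions in Flow-Attention differ; this only illustrates the linear-complexity conservation structure.

```python
import torch

def flow_style_linear_attention(q, k, v, eps: float = 1e-6):
    """Linear-complexity attention sketch with explicit incoming/outgoing flows.
    q, k, v: [batch, length, dim]; an illustration, not the paper's exact Flow-Attention."""
    q, k = torch.sigmoid(q), torch.sigmoid(k)                       # non-negative feature maps
    incoming = q @ k.sum(dim=1, keepdim=True).transpose(1, 2)       # flow arriving at each sink (query)
    outgoing = k @ q.sum(dim=1, keepdim=True).transpose(1, 2)       # flow leaving each source (key)
    v = v * outgoing.shape[1] * torch.softmax(outgoing, dim=1)      # source competition over values
    return (q @ (k.transpose(1, 2) @ v)) / (incoming + eps)         # aggregation normalized per sink

q = k = v = torch.randn(2, 128, 64)
print(flow_style_linear_attention(q, k, v).shape)  # torch.Size([2, 128, 64])
```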

ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning

1 code implementation • 8 Oct 2021 • Zhiyu Yao, Yunbo Wang, Haixu Wu, Jianmin Wang, Mingsheng Long

To this end, we propose ModeRNN, which introduces a novel method to learn structured hidden representations between recurrent states.

Inductive Bias

Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy

3 code implementations • ICLR 2022 • Jiehui Xu, Haixu Wu, Jianmin Wang, Mingsheng Long

Unsupervised detection of anomaly points in time series is a challenging problem, which requires the model to derive a distinguishable criterion.

Anomaly Detection • Time Series +1

PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning

3 code implementations • 17 Mar 2021 • Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long

This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate in nearly independent transition manners, and finally form unified representations of the complex environment.

Ranked #1 on Video Prediction on KTH (Cond metric)

Video Prediction • Weather Forecasting

MotionRNN: A Flexible Model for Video Prediction with Spacetime-Varying Motions

1 code implementation • CVPR 2021 • Haixu Wu, Zhiyu Yao, Jianmin Wang, Mingsheng Long

With high flexibility, this framework can adapt to a series of models for deterministic spatiotemporal prediction.

Video Prediction
