1 code implementation • 8 Jan 2024 • Pengxin Guo, Pengrong Jin, Ziyue Li, Lei Bai, Yu Zhang
To make a model trained on historical data adapt better to future data in a fully online manner, this paper conducts the first study of online test-time adaptation techniques for the spatial-temporal traffic flow forecasting problem.
Ranked #4 on Traffic Prediction on PeMS07
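A minimal sketch of the fully online setting the abstract describes: predict each step, then take one gradient step once the ground truth arrives. The toy forecaster, window length, and learning rate are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

class FlowForecaster(nn.Module):
    """Toy forecaster: maps the last `window` readings of every sensor
    to a one-step-ahead prediction for all sensors."""
    def __init__(self, num_sensors: int, window: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (B, window, sensors) -> (B, window*sensors)
            nn.Linear(window * num_sensors, 64),
            nn.ReLU(),
            nn.Linear(64, num_sensors),
        )

    def forward(self, x):
        return self.net(x)

def online_adapt(model, stream, window=12, lr=1e-4):
    """Predict each step, then update on the new observation (online TTA)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    preds = []
    for t in range(window, stream.shape[0]):
        x = stream[t - window:t].unsqueeze(0)      # (1, window, sensors)
        y = stream[t].unsqueeze(0)                 # ground truth arriving at step t
        pred = model(x)
        preds.append(pred.detach())
        loss = nn.functional.mse_loss(pred, y)     # adapt on the latest observation
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.cat(preds)

# Usage on a synthetic stream of 200 steps from 5 sensors.
stream = torch.randn(200, 5)
model = FlowForecaster(num_sensors=5)
online_adapt(model, stream)
```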
no code implementations • 8 Dec 2023 • Jinjing Zhu, Feiyang Ye, Qiao Xiao, Pengxin Guo, Yu Zhang, Qiang Yang
Specifically, the proposed LIWUDA method constructs a weight network that assigns a weight to each instance based on its probability of belonging to the common classes, and designs Weighted Optimal Transport (WOT) for domain alignment by leveraging these instance weights.
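A minimal sketch of instance weighting plus weighted optimal transport in the spirit of WOT. The weight network, squared-Euclidean cost, and entropic Sinkhorn solver here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Hypothetical weight network: feature -> probability of common-class membership.
weight_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())

def sinkhorn(cost, a, b, eps=0.1, iters=50):
    """Entropic OT plan between marginals a and b (each sums to 1)."""
    K = torch.exp(-cost / eps)
    u = torch.ones_like(a)
    for _ in range(iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan

def weighted_ot_loss(src_feat, tgt_feat):
    w = weight_net(src_feat).squeeze(1)          # per-instance weights
    a = w / w.sum()                              # weighted source marginal
    b = torch.full((tgt_feat.shape[0],), 1.0 / tgt_feat.shape[0])
    cost = torch.cdist(src_feat, tgt_feat) ** 2  # squared Euclidean cost
    cost = cost / cost.mean()                    # rescale to keep exp() stable
    plan = sinkhorn(cost, a, b)
    return (plan * cost).sum()                   # alignment loss

src, tgt = torch.randn(16, 64), torch.randn(24, 64)
loss = weighted_ot_loss(src, tgt)
loss.backward()   # gradients flow into the weight network via the marginal
```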
2 code implementations • British Machine Vision Conference 2022 • Pengxin Guo, Jinjing Zhu, Yu Zhang
To solve this problem, we propose a Selective Partial Domain Adaptation (SPDA) method, which selects data useful for adaptation to the target domain.
Ranked #1 on Partial Domain Adaptation on VisDA2017
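A minimal sketch of selecting source data for partial domain adaptation. A common heuristic (assumed here, not necessarily SPDA's exact rule) is to weight source classes by the classifier's average prediction on the target domain and keep only samples from highly weighted classes.

```python
import torch

def select_source(classifier, src_x, src_y, tgt_x, keep_ratio=0.5):
    with torch.no_grad():
        # Average target prediction estimates how relevant each class is.
        class_weight = classifier(tgt_x).softmax(dim=1).mean(dim=0)
    thresh = class_weight.quantile(1 - keep_ratio)
    keep = class_weight[src_y] >= thresh         # per-sample class relevance
    return src_x[keep], src_y[keep]

# Usage with a toy linear classifier over 10 source classes.
clf = torch.nn.Linear(64, 10)
src_x, src_y = torch.randn(100, 64), torch.randint(0, 10, (100,))
tgt_x = torch.randn(40, 64)
sel_x, sel_y = select_source(clf, src_x, src_y, tgt_x)
```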
no code implementations • 15 Jan 2022 • Xiyu Wang, Pengxin Guo, Yu Zhang
Specifically, in BCAT, we design a weight-sharing quadruple-branch transformer with a bidirectional cross-attention mechanism to learn domain-invariant feature representations.
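A minimal sketch of weight-shared bidirectional cross-attention in the spirit of BCAT: the same attention weights serve both directions, so source tokens attend to target tokens and vice versa. The single-layer design and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # One shared attention module used for both directions.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, src_tokens, tgt_tokens):
        # Source queries attend to target keys/values, and symmetrically.
        src2tgt, _ = self.attn(src_tokens, tgt_tokens, tgt_tokens)
        tgt2src, _ = self.attn(tgt_tokens, src_tokens, src_tokens)
        return src2tgt, tgt2src

xattn = BidirectionalCrossAttention(dim=128)
src = torch.randn(8, 16, 128)   # (batch, tokens, dim), source domain
tgt = torch.randn(8, 16, 128)   # target domain
src_feat, tgt_feat = xattn(src, tgt)
```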
no code implementations • 12 Sep 2021 • Zhixiong Yue, Pengxin Guo, Yu Zhang
Based on the PC function, we propose a new method called Domain Adaptation by Maximizing Population Correlation (DAMPC) to learn a domain-invariant feature representation for DA.
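A minimal sketch of learning domain-invariant features by matching the source and target feature populations. The PC function is the paper's own measure; as a plainly labeled stand-in, this sketch matches first- and second-order population statistics (a CORAL-style criterion).

```python
import torch

def population_match_loss(src_feat, tgt_feat):
    # Gap between population means.
    mean_gap = (src_feat.mean(0) - tgt_feat.mean(0)).pow(2).sum()
    # Gap between population covariances.
    cov = lambda f: (f - f.mean(0)).t() @ (f - f.mean(0)) / (f.shape[0] - 1)
    cov_gap = (cov(src_feat) - cov(tgt_feat)).pow(2).sum()
    return mean_gap + cov_gap

encoder = torch.nn.Linear(32, 16)            # toy feature extractor
src, tgt = torch.randn(64, 32), torch.randn(64, 32)
loss = population_match_loss(encoder(src), encoder(tgt))
loss.backward()                              # drives the two populations to align
```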
no code implementations • NeurIPS 2021 • Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, Yu Zhang
Empirically, we show the effectiveness of the proposed MOML framework on several meta-learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.
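A minimal sketch of a bi-level setup with a multi-objective upper level, in the spirit of MOML. The two validation objectives, the first-order inner step, and the fixed scalarization weights are illustrative assumptions; MOML itself treats the upper level as a true multi-objective problem.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
meta_opt = torch.optim.SGD(model.parameters(), lr=0.01)

def inner_adapt(model, x, y, lr=0.1):
    """One first-order inner step on the task's training split."""
    loss = nn.functional.mse_loss(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters())
    return [p - lr * g for p, g in zip(model.parameters(), grads)]

x_tr, y_tr = torch.randn(32, 10), torch.randn(32, 1)
x_val, y_val = torch.randn(32, 10), torch.randn(32, 1)

w, b = inner_adapt(model, x_tr, y_tr)            # adapted parameters
pred = x_val @ w.t() + b
obj1 = nn.functional.mse_loss(pred, y_val)       # validation accuracy objective
obj2 = w.abs().mean()                            # e.g., a sparsity objective
meta_opt.zero_grad()
(0.5 * obj1 + 0.5 * obj2).backward()             # scalarized upper-level update
meta_opt.step()
```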
no code implementations • 19 Nov 2020 • Pengxin Guo, Yuancheng Xu, Baijiong Lin, Yu Zhang
More specifically, MTA uses a generator for adversarial perturbations that consists of a shared encoder for all tasks and multiple task-specific decoders.
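A minimal sketch of a perturbation generator with a shared encoder and task-specific decoders, mirroring the structure described above. Layer sizes and the tanh-bounded output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    def __init__(self, dim: int, num_tasks: int, eps: float = 0.1):
        super().__init__()
        self.eps = eps
        self.encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())  # shared across tasks
        self.decoders = nn.ModuleList(                               # one decoder per task
            [nn.Linear(64, dim) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id: int):
        z = self.encoder(x)
        # Bound the perturbation so the adversarial example stays close to x.
        delta = self.eps * torch.tanh(self.decoders[task_id](z))
        return x + delta

gen = PerturbationGenerator(dim=32, num_tasks=3)
x = torch.randn(8, 32)
x_adv = gen(x, task_id=1)   # adversarial input for task 1
```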
1 code implementation • 12 Feb 2020 • Pengxin Guo, Chang Deng, Linjie Xu, Xiaonan Huang, Yu Zhang
The proposed feature augmentation strategy can be used in many deep multi-task learning models.