Search Results for author: Pengxin Guo

Found 8 papers, 3 papers with code

Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting

1 code implementation • 8 Jan 2024 • Pengxin Guo, Pengrong Jin, Ziyue Li, Lei Bai, Yu Zhang

To help a model trained on historical data adapt to future data in a fully online manner, this paper conducts the first study of online test-time adaptation techniques for spatial-temporal traffic flow forecasting.

Test-time Adaptation • Traffic Prediction

A Unified Framework for Unsupervised Domain Adaptation based on Instance Weighting

no code implementations • 8 Dec 2023 • Jinjing Zhu, Feiyang Ye, Qiao Xiao, Pengxin Guo, Yu Zhang

Specifically, the proposed LIWUDA method constructs a weight network to assign weights to each instance based on its probability of belonging to common classes, and designs Weighted Optimal Transport (WOT) for domain alignment by leveraging instance weights (see the sketch after this entry).

Partial Domain Adaptation • Universal Domain Adaptation • +1
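
Below is a minimal PyTorch sketch of the instance-weighted optimal transport idea: instance weights reshape the source marginal of an entropic OT problem, so instances likely to belong to private classes contribute less to alignment. The weight network producing `src_weights` is not shown, and the generic Sinkhorn solver here is an illustrative stand-in, not the authors' implementation.

```python
# Hedged sketch of instance-weighted optimal transport (WOT-style).
import torch

def sinkhorn(cost, a, b, eps=0.1, n_iters=50):
    """Entropic OT plan between marginals a (source) and b (target)."""
    K = torch.exp(-cost / eps)                  # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)                     # match column marginal b
        u = a / (K @ v)                         # match row marginal a
    return u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan

def wot_loss(src_feat, tgt_feat, src_weights):
    """Instance weights reshape the source marginal, softly suppressing
    instances that likely belong to private (non-common) classes."""
    cost = torch.cdist(src_feat, tgt_feat) ** 2  # pairwise squared distances
    a = src_weights / src_weights.sum()          # weighted source marginal
    m = tgt_feat.size(0)
    b = torch.full((m,), 1.0 / m, device=tgt_feat.device)
    plan = sinkhorn(cost, a, b)
    return (plan * cost).sum()                   # weighted alignment cost
```

Downweighting the source marginal, rather than filtering instances outright, keeps the transport problem balanced while still reducing the influence of likely private-class instances.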

Selective Partial Domain Adaptation

2 code implementations • British Machine Vision Conference 2022 • Pengxin Guo, Jinjing Zhu, Yu Zhang

To solve this problem, we propose a Selective Partial Domain Adaptation (SPDA) method, which selects useful data for the adaptation to the target domain.

Partial Domain Adaptation

Domain Adaptation via Bidirectional Cross-Attention Transformer

no code implementations • 15 Jan 2022 • Xiyu Wang, Pengxin Guo, Yu Zhang

Specifically, in BCAT, we design a weight-sharing quadruple-branch transformer with a bidirectional cross-attention mechanism to learn domain-invariant feature representations (see the sketch after this entry).

Domain Adaptation
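
As an illustration, here is a minimal PyTorch sketch of bidirectional cross-attention between source and target token sequences. A single shared `nn.MultiheadAttention` stands in for the paper's weight-sharing design; the full quadruple-branch layout is not reproduced.

```python
# Hedged sketch of bidirectional cross-attention in the spirit of BCAT.
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        # One attention module shared by both directions (weight sharing).
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, src_tokens, tgt_tokens):
        # Source queries attend to target keys/values, and vice versa,
        # mixing feature information across domains in both directions.
        src2tgt, _ = self.attn(src_tokens, tgt_tokens, tgt_tokens)
        tgt2src, _ = self.attn(tgt_tokens, src_tokens, src_tokens)
        return src2tgt, tgt2src
```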

Domain Adaptation by Maximizing Population Correlation with Neural Architecture Search

no code implementations • 12 Sep 2021 • Zhixiong Yue, Pengxin Guo, Yu Zhang

Based on the PC function, we propose a new method called Domain Adaptation by Maximizing Population Correlation (DAMPC) to learn a domain-invariant feature representation for DA (see the sketch after this entry).

Domain Adaptation • Neural Architecture Search
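
The paper's exact Population Correlation (PC) function is not given in this snippet. As a loose, clearly assumed stand-in, the sketch below correlates the per-dimension feature means of the two domains and returns the negative correlation as a loss, so minimizing it maximizes cross-domain correlation.

```python
# Illustrative stand-in only: this definition is my assumption for
# illustration, not the authors' PC function.
import torch

def population_correlation_loss(src_feat, tgt_feat, eps=1e-8):
    mu_s = src_feat.mean(dim=0)      # per-dimension source statistics
    mu_t = tgt_feat.mean(dim=0)      # per-dimension target statistics
    mu_s = mu_s - mu_s.mean()        # center before correlating
    mu_t = mu_t - mu_t.mean()
    corr = (mu_s * mu_t).sum() / (mu_s.norm() * mu_t.norm() + eps)
    return -corr                     # minimizing this maximizes correlation
```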

Multi-Objective Meta Learning

no code implementations • NeurIPS 2021 • Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, Yu Zhang

Empirically, we show the effectiveness of the proposed MOML framework in several meta learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.

Domain Adaptation • Few-Shot Learning • +2

Multi-Task Adversarial Attack

no code implementations • 19 Nov 2020 • Pengxin Guo, Yuancheng Xu, Baijiong Lin, Yu Zhang

More specifically, MTA uses a generator for adversarial perturbations that consists of a shared encoder for all tasks and multiple task-specific decoders (see the sketch after this entry).

Adversarial Attack
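
A minimal PyTorch sketch of such a generator follows: a shared convolutional encoder feeds one decoder per task. The channel sizes, the tanh bounding, and the eps budget are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a shared-encoder, task-specific-decoder generator (MTA-style).
import torch
import torch.nn as nn

class MultiTaskPerturbationGenerator(nn.Module):
    def __init__(self, in_ch=3, hidden=64, num_tasks=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        # Shared encoder captures perturbation structure common to all tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # One lightweight decoder per task specializes the shared code.
        self.decoders = nn.ModuleList(
            [nn.Conv2d(hidden, in_ch, 3, padding=1) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        z = self.encoder(x)
        delta = torch.tanh(self.decoders[task_id](z)) * self.eps  # bounded perturbation
        return (x + delta).clamp(0, 1)                            # adversarial example
```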

Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural Network

1 code implementation • 12 Feb 2020 • Pengxin Guo, Chang Deng, Linjie Xu, Xiaonan Huang, Yu Zhang

The proposed feature augmentation strategy can be used in many deep multi-task learning models.

Multi-Task Learning
