no code implementations • 24 Mar 2024 • Junqiao Fan, Jianfei Yang, Yuecong Xu, Lihua Xie
However, the mmWave radar has a limited resolution with severe noise, leading to inaccurate and inconsistent human pose estimation.
no code implementations • 11 Mar 2024 • Haozhi Cao, Yuecong Xu, Jianfei Yang, Pengyu Yin, Xingyu Ji, Shenghai Yuan, Lihua Xie
Multi-modal test-time adaptation (MM-TTA) is proposed to adapt models to an unlabeled target domain by leveraging the complementary multi-modal inputs in an online manner.
no code implementations • 29 Feb 2024 • Jianfei Yang, Shijie Tang, Yuecong Xu, Yunjiao Zhou, Lihua Xie
Benefiting from our unsupervised learning procedure, the network requires only a small amount of annotated data for fine-tuning and can adapt to new environments with better performance.
no code implementations • 17 Nov 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
In this paper, we propose SEnsor Alignment (SEA) for MTS-UDA, aiming to reduce domain discrepancy at both the local and global sensor levels.
1 code implementation • 21 Sep 2023 • Haozhi Cao, Yuecong Xu, Jianfei Yang, Pengyu Yin, Shenghai Yuan, Lihua Xie
In this work, we propose Multi-modal Prior Aided (MoPA) domain adaptation to improve the performance of rare objects.
1 code implementation • 11 Sep 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances, enabling us to fully model the ST dependencies by considering the correlations between DEDT.
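The decay-graph idea can be sketched in a few lines: edges connect sensors across all timestamps, with weights that shrink as temporal distance grows. This is a minimal illustration assuming an exponential decay parameterization; the paper's exact weighting function and the `decay_rate` parameter are assumptions for illustration.

```python
import numpy as np

def decay_graph_weights(timestamps, decay_rate=0.5):
    """Edge weights between sensor readings at different timestamps,
    decaying with temporal distance (hypothetical exponential form)."""
    t = np.asarray(timestamps, dtype=float)
    dist = np.abs(t[:, None] - t[None, :])   # pairwise temporal distances
    return np.exp(-decay_rate * dist)        # closer in time -> stronger edge

# Four timestamps: self-edges get weight 1, distant pairs get smaller weights.
W = decay_graph_weights([0, 1, 2, 3])
```

Any monotonically decreasing kernel over temporal distance would serve the same purpose of letting nearby timestamps interact more strongly.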
1 code implementation • 11 Sep 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
As MTS data typically originate from multiple sensors, ensuring spatial consistency becomes essential for the overall performance of contrastive learning on MTS data.
no code implementations • 30 May 2023 • Jianfei Yang, Hanjie Qian, Yuecong Xu, Kai Wang, Lihua Xie
Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.
1 code implementation • NeurIPS 2023 • Jianfei Yang, He Huang, Yunjiao Zhou, Xinyan Chen, Yuecong Xu, Shenghai Yuan, Han Zou, Chris Xiaoxuan Lu, Lihua Xie
Extensive experiments have been conducted to compare the sensing capacity of each or several modalities in terms of multiple tasks.
no code implementations • 18 Mar 2023 • Xiyu Wang, Yuecong Xu, Jianfei Yang, Bihan Wen, Alex C. Kot
The second module compares the outputs of augmented data from the current model to the outputs of weakly augmented data from the source model, forming a novel consistency regularization on the model to alleviate the accumulation of prediction errors.
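The consistency regularization described above can be sketched as a divergence between the current model's predictions on strongly augmented inputs and a frozen source model's predictions on weakly augmented inputs. This is a minimal sketch assuming a KL-divergence formulation with softmax outputs; the paper's actual loss and augmentation pipeline may differ.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(current_logits, source_logits):
    """KL(source || current): penalize the current model (strong augmentation)
    for drifting from the frozen source model (weak augmentation)."""
    p = softmax(source_logits)    # target distribution from the source model
    q = softmax(current_logits)   # current model's prediction
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())
```

Anchoring the current model to the source model's weak-augmentation outputs is what limits the accumulation of prediction errors during adaptation.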
no code implementations • ICCV 2023 • Haozhi Cao, Yuecong Xu, Jianfei Yang, Pengyu Yin, Shenghai Yuan, Lihua Xie
In this paper, we explore Multi-Modal Continual Test-Time Adaptation (MM-CTTA) as a new extension of CTTA for 3D semantic segmentation.
no code implementations • ICCV 2023 • Yuecong Xu, Jianfei Yang, Yunjiao Zhou, Zhenghua Chen, Min Wu, XiaoLi Li
We thus consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario where we adapt video models with only a few target video samples.
no code implementations • 17 Nov 2022 • Yuecong Xu, Haozhi Cao, Zhenghua Chen, XiaoLi Li, Lihua Xie, Jianfei Yang
To uniformly tackle performance degradation and the high cost of video annotation, video unsupervised domain adaptation (VUDA) is introduced to adapt video models from a labeled source domain to an unlabeled target domain by alleviating video domain shift, improving the generalizability and portability of video models.
no code implementations • 10 Aug 2022 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.
no code implementations • 13 Apr 2022 • Xiyu Wang, Yuecong Xu, Kezhi Mao, Jianfei Yang
It utilizes a novel class weight calibration method to alleviate the negative transfer caused by incorrect class weights.
1 code implementation • 9 Mar 2022 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Zhenghua Chen
Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments.
no code implementations • 19 Feb 2022 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu
While action recognition (AR) has seen large improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models robust to challenging environments in real-world scenarios are still under-explored.
no code implementations • 26 Sep 2021 • Haozhi Cao, Yuecong Xu, Jianfei Yang, Kezhi Mao, Lihua Xie, Jianxiong Yin, Simon See
This paper introduces a novel self-supervised method that leverages incoherence detection for video representation learning.
no code implementations • 21 Sep 2021 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Rui Zhao, Zhenghua Chen
Multi-Source Domain Adaptation (MSDA) is a more practical domain adaptation setting for real-world scenarios.
no code implementations • 11 Jul 2021 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin, Simon See
Yet correlation features of the same action would differ across domains due to domain shift.
no code implementations • ICCV 2021 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Qi Li, Kezhi Mao, Zhenghua Chen
For videos, such negative transfer could be triggered by both spatial and temporal features, which leads to a more challenging Partial Video Domain Adaptation (PVDA) problem.
no code implementations • 26 Aug 2020 • Haozhi Cao, Yuecong Xu, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See
Temporal feature extraction is an essential technique in video-based action recognition.
no code implementations • 9 Jun 2020 • Yuecong Xu, Haozhi Cao, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See
Empirical results prove the effectiveness and efficiency of our PNL module, which achieves state-of-the-art performance of 83.09% on the Mini-Kinetics dataset, with decreased computation cost compared to the non-local block.
1 code implementation • 6 Jun 2020 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin, Simon See
We bridge the gap of the lack of data for this task by collecting a new dataset: the Action Recognition in the Dark (ARID) dataset.
no code implementations • 6 May 2020 • Yuecong Xu, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See
Temporal feature extraction is an important issue in video-based action recognition.