Search Results for author: Yuecong Xu

Found 25 papers, 5 papers with code

Diffusion Model is a Good Pose Estimator from 3D RF-Vision

no code implementations24 Mar 2024 Junqiao Fan, Jianfei Yang, Yuecong Xu, Lihua Xie

However, mmWave radar has limited resolution and suffers from severe noise, leading to inaccurate and inconsistent human pose estimation.

Pose Estimation

Reliable Spatial-Temporal Voxels For Multi-Modal Test-Time Adaptation

no code implementations11 Mar 2024 Haozhi Cao, Yuecong Xu, Jianfei Yang, Pengyu Yin, Xingyu Ji, Shenghai Yuan, Lihua Xie

Multi-modal test-time adaptation (MM-TTA) is proposed to adapt models to an unlabeled target domain by leveraging the complementary multi-modal inputs in an online manner.

Test-time Adaptation

MaskFi: Unsupervised Learning of WiFi and Vision Representations for Multimodal Human Activity Recognition

no code implementations29 Feb 2024 Jianfei Yang, Shijie Tang, Yuecong Xu, Yunjiao Zhou, Lihua Xie

Benefiting from our unsupervised learning procedure, the network requires only a small amount of annotated data for finetuning and can adapt to the new environment with better performance.

Human Activity Recognition Representation Learning

Graph-Aware Contrasting for Multivariate Time-Series Classification

1 code implementation11 Sep 2023 Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

As MTS data typically originate from multiple sensors, ensuring spatial consistency becomes essential for the overall performance of contrastive learning on MTS data.

Classification Contrastive Learning +3
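The excerpt above notes that contrastive learning on multivariate time series must keep representations consistent across views. As a point of reference, a generic NT-Xent contrastive loss over two augmented views can be sketched as below; this is a standard formulation, not the paper's graph-aware variant, and `nt_xent` and its arguments are illustrative names:

```python
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """NT-Xent contrastive loss over two augmented views: each sample's
    other view is its positive; all other samples in the batch are negatives."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    # row i's positive is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_aligned = nt_xent(z1, z1 + 0.01 * rng.normal(size=(8, 16)))  # near-identical views
loss_random = nt_xent(z1, rng.normal(size=(8, 16)))               # unrelated views
print(loss_aligned < loss_random)  # aligned views yield the lower loss
```

The paper's method additionally enforces graph-level (spatial) consistency across sensors, which this generic sketch does not reproduce.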

Fully-Connected Spatial-Temporal Graph for Multivariate Time-Series Data

1 code implementation11 Sep 2023 Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances, enabling us to fully model the spatial-temporal (ST) dependencies by considering the correlations between DEDT.

graph construction Time Series
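The decay-graph idea above — every sensor connected to every sensor at every timestamp, with edge weight shrinking as temporal distance grows — can be sketched as follows. The function name `decay_adjacency`, the exponential decay form, and the (timestamp-major) node ordering are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def decay_adjacency(n_sensors: int, n_steps: int, decay: float = 0.5) -> np.ndarray:
    """Fully-connected spatial-temporal adjacency: node i is (timestamp i // n_sensors,
    sensor i % n_sensors); edge weight decays exponentially with the temporal
    distance between the two nodes' timestamps."""
    n = n_sensors * n_steps
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t_i, t_j = i // n_sensors, j // n_sensors
            adj[i, j] = decay ** abs(t_i - t_j)  # distance 0 -> weight 1.0
    return adj

A = decay_adjacency(n_sensors=3, n_steps=4, decay=0.5)
print(A.shape)           # (12, 12): one node per (timestamp, sensor) pair
print(A[0, 0], A[0, 3])  # 1.0 within a timestamp, 0.5 one step apart
```

Such an adjacency lets a graph network attend to every sensor at every timestamp while still biasing it toward temporally nearby readings.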

Can We Evaluate Domain Adaptation Models Without Target-Domain Labels?

no code implementations30 May 2023 Jianfei Yang, Hanjie Qian, Yuecong Xu, Kai Wang, Lihua Xie

Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.

Unsupervised Domain Adaptation

Augmenting and Aligning Snippets for Few-Shot Video Domain Adaptation

no code implementations ICCV 2023 Yuecong Xu, Jianfei Yang, Yunjiao Zhou, Zhenghua Chen, Min Wu, XiaoLi Li

We thus consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario where we adapt video models with only a few target video samples.

Action Recognition Unsupervised Domain Adaptation

Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation

no code implementations18 Mar 2023 Xiyu Wang, Yuecong Xu, Jianfei Yang, Bihan Wen, Alex C. Kot

The second module compares the outputs of augmented data from the current model to the outputs of weakly augmented data from the source model, forming a novel consistency regularization on the model to alleviate the accumulation of prediction errors.

Autonomous Driving Self-Knowledge Distillation +1
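The consistency regularization described above — matching the adapting model's predictions on strongly augmented inputs to a frozen source model's predictions on weakly augmented inputs — can be sketched with a KL-divergence term. The names `consistency_loss`, the teacher/student framing, and the choice of KL are illustrative assumptions; the paper's exact regularizer may differ:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # numerically stable row-wise softmax
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(student_logits_strong: np.ndarray,
                     teacher_logits_weak: np.ndarray) -> float:
    """KL(teacher || student): push the current model's predictions on strongly
    augmented data toward the source model's predictions on weakly augmented data,
    discouraging the accumulation of prediction errors during adaptation."""
    p = softmax(teacher_logits_weak)    # frozen source model, weak augmentation
    q = softmax(student_logits_strong)  # current model, strong augmentation
    return float((p * (np.log(p + 1e-8) - np.log(q + 1e-8))).sum(axis=1).mean())

teacher = np.array([[2.0, 0.1, -1.0]])
agree = consistency_loss(np.array([[2.0, 0.1, -1.0]]), teacher)    # same prediction
disagree = consistency_loss(np.array([[-1.0, 0.1, 2.0]]), teacher)  # flipped prediction
print(agree < disagree)  # True: matching predictions incur ~zero loss
```

In practice this term would be added to the main adaptation objective with a weighting coefficient.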

Video Unsupervised Domain Adaptation with Deep Learning: A Comprehensive Survey

no code implementations17 Nov 2022 Yuecong Xu, Haozhi Cao, Zhenghua Chen, XiaoLi Li, Lihua Xie, Jianfei Yang

To tackle performance degradation and address the high cost of video annotation in a unified manner, video unsupervised domain adaptation (VUDA) is introduced to adapt video models from the labeled source domain to the unlabeled target domain by alleviating video domain shift, thereby improving the generalizability and portability of video models.

Action Recognition Unsupervised Domain Adaptation

Leveraging Endo- and Exo-Temporal Regularization for Black-box Video Domain Adaptation

no code implementations10 Aug 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.

Action Recognition Unsupervised Domain Adaptation

Calibrating Class Weights with Multi-Modal Information for Partial Video Domain Adaptation

no code implementations13 Apr 2022 Xiyu Wang, Yuecong Xu, Kezhi Mao, Jianfei Yang

It utilizes a novel class weight calibration method to alleviate the negative transfer caused by incorrect class weights.

Domain Adaptation Video Classification

Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition

1 code implementation9 Mar 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Zhenghua Chen

Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments.

Action Recognition Source-Free Domain Adaptation +1

Going Deeper into Recognizing Actions in Dark Environments: A Comprehensive Benchmark Study

no code implementations19 Feb 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu

While action recognition (AR) has gained large improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models robust to challenging environments in real-world scenarios are still under-explored.

Action Recognition Autonomous Driving

Self-Supervised Video Representation Learning by Video Incoherence Detection

no code implementations26 Sep 2021 Haozhi Cao, Yuecong Xu, Jianfei Yang, Kezhi Mao, Lihua Xie, Jianxiong Yin, Simon See

This paper introduces a novel self-supervised method that leverages incoherence detection for video representation learning.

Action Recognition Contrastive Learning +3

Multi-Source Video Domain Adaptation with Temporal Attentive Moment Alignment

no code implementations21 Sep 2021 Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Rui Zhao, Zhenghua Chen

Multi-Source Domain Adaptation (MSDA) is a more practical domain adaptation scenario for real-world applications.

Unsupervised Domain Adaptation

Partial Video Domain Adaptation with Partial Adversarial Temporal Attentive Network

no code implementations ICCV 2021 Yuecong Xu, Jianfei Yang, Haozhi Cao, Qi Li, Kezhi Mao, Zhenghua Chen

For videos, such negative transfer could be triggered by both spatial and temporal features, which leads to a more challenging Partial Video Domain Adaptation (PVDA) problem.

Partial Domain Adaptation

PNL: Efficient Long-Range Dependencies Extraction with Pyramid Non-Local Module for Action Recognition

no code implementations9 Jun 2020 Yuecong Xu, Haozhi Cao, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See

Empirical results prove the effectiveness and efficiency of our PNL module, which achieves state-of-the-art performance of 83.09% on the Mini-Kinetics dataset, with decreased computation cost compared to the non-local block.

Action Recognition

ARID: A New Dataset for Recognizing Action in the Dark

1 code implementation6 Jun 2020 Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin, Simon See

We bridge the gap of the lack of data for this task by collecting a new dataset: the Action Recognition in the Dark (ARID) dataset.

Action Recognition
