Search Results for author: Jianfei Yang

Found 33 papers, 9 papers with code

Mind the Discriminability: Asymmetric Adversarial Domain Adaptation

no code implementations ECCV 2020 Jianfei Yang, Han Zou, Yuxun Zhou, Zhaoyang Zeng, Lihua Xie

Adversarial domain adaptation has achieved tremendous success by learning domain-invariant feature representations.

Domain Adaptation

AirFi: Empowering WiFi-based Passive Human Gesture Recognition to Unseen Environment via Domain Generalization

no code implementations21 Sep 2022 Dazhuo Wang, Jianfei Yang, Wei Cui, Lihua Xie, Sumei Sun

AirFi is a novel domain generalization framework that learns the critical, environment-independent part of the CSI and generalizes the model to unseen scenarios, without requiring any data collection for adaptation to the new environment.

Domain Generalization Few-Shot Learning +1

GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning

no code implementations30 Aug 2022 Lang Deng, Jianfei Yang, Shenghai Yuan, Han Zou, Chris Xiaoxuan Lu, Lihua Xie

As an important biomarker for human identification, human gait can be collected at a distance by passive sensors without subject cooperation, making it essential for crime prevention, security detection, and other human identification applications.

Gait Recognition

MetaFi: Device-Free Pose Estimation via Commodity WiFi for Metaverse Avatar Simulation

no code implementations22 Aug 2022 Jianfei Yang, Yunjiao Zhou, He Huang, Han Zou, Lihua Xie

An avatar is a representative of a physical user in the virtual world that can engage in different activities and interact with other objects in the metaverse.

Pose Estimation

EXTERN: Leveraging Endo-Temporal Regularization for Black-box Video Domain Adaptation

no code implementations10 Aug 2022 Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen

To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.

Action Recognition Unsupervised Domain Adaptation

Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors

1 code implementation28 May 2022 Jianfei Yang, Xiangyu Peng, Kai Wang, Zheng Zhu, Jiashi Feng, Lihua Xie, Yang You

Domain Adaptation of Black-box Predictors (DABP) aims to learn a model on an unlabeled target domain supervised by a black-box predictor trained on a source domain.

Domain Adaptation Knowledge Distillation
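
The DABP setup naturally lends itself to a distillation loop. Below is a hedged sketch of that basic setup only (not the paper's divide-to-adapt method): a target-domain student is trained purely from the soft outputs of a queried black-box source predictor. `black_box_predict` is a stand-in for an API whose parameters are inaccessible.

```python
# Hedged sketch of the basic DABP setup: a target-domain student learns only
# from the (soft) predictions of a black-box source predictor.
import torch
import torch.nn as nn
import torch.nn.functional as F

def black_box_predict(x):
    # Placeholder for the inaccessible source model: a fixed random linear map.
    torch.manual_seed(0)
    w = torch.randn(10, x.shape[1])
    return F.softmax(x @ w.t(), dim=1)

student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

target_data = torch.randn(128, 32)  # unlabeled target-domain batch
for _ in range(5):
    with torch.no_grad():
        teacher_probs = black_box_predict(target_data)   # only outputs are visible
    student_log_probs = F.log_softmax(student(target_data), dim=1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final distillation loss: {loss.item():.4f}")
```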

Reliable Label Correction is a Good Booster When Learning with Extremely Noisy Labels

1 code implementation30 Apr 2022 Kai Wang, Xiangyu Peng, Shuo Yang, Jianfei Yang, Zheng Zhu, Xinchao Wang, Yang You

This paradigm, however, is prone to significant degeneration under heavy label noise, as the number of clean samples is too small for conventional methods to behave well.

Learning with noisy labels

Calibrating Class Weights with Multi-Modal Information for Partial Video Domain Adaptation

no code implementations13 Apr 2022 Xiyu Wang, Yuecong Xu, Kezhi Mao, Jianfei Yang

It utilizes a novel class weight calibration method to alleviate the negative transfer caused by incorrect class weights.

Domain Adaptation Video Classification

AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning

no code implementations12 Apr 2022 Jianfei Yang, Xinyan Chen, Han Zou, Dazhuo Wang, Lihua Xie

We believe that AutoFi takes a huge step toward automatic WiFi sensing without any developer engagement while overcoming the cross-site issue.

Activity Recognition Domain Adaptation +4

RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition

no code implementations4 Apr 2022 Jianfei Yang, Han Zou, Lihua Xie

The results validate that our method works well on wireless human activity recognition and person identification systems.

Adversarial Attack Human Activity Recognition +1

AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications

no code implementations11 Mar 2022 Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong, Kaiwei Sun, Hangzheng Lin, Wanlu Liu, Wanghui Ding, Jianfei Yang, Haoji Hu, Yueling Zhang, Yang Feng, Zeyu Zhao, Huikai Wu, Youyi Zheng, Bing Fang, Zuozhu Liu, Zhihe Zhao

Here, we present a Deep Dental Multimodal Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model (the most accurate digital dental model), and a fusion model to generate 3D fused crown-root-bone structures with high fidelity and accurate occlusal and dentition information.

Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition

1 code implementation9 Mar 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Wu Min, Zhenghua Chen

Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments.

Action Recognition Unsupervised Domain Adaptation

Going Deeper into Recognizing Actions in Dark Environments: A Comprehensive Benchmark Study

no code implementations19 Feb 2022 Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu

While action recognition (AR) has gained large improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models robust to challenging environments in real-world scenarios are still under-explored.

Action Recognition Autonomous Driving

Shuffle Augmentation of Features from Unlabeled Data for Unsupervised Domain Adaptation

no code implementations28 Jan 2022 Changwei Xu, Jianfei Yang, Haoran Tang, Han Zou, Cheng Lu, Tianshuo Zhang

Unsupervised Domain Adaptation (UDA), a branch of transfer learning where labels for target samples are unavailable, has been widely researched and developed in recent years with the help of adversarially trained models.

Unsupervised Domain Adaptation

Towards Realistic Visual Dubbing with Heterogeneous Sources

no code implementations17 Jan 2022 Tianyi Xie, Liucheng Liao, Cheng Bi, Benlai Tang, Xiang Yin, Jianfei Yang, Mingjie Wang, Jiali Yao, Yang Zhang, Zejun Ma

The task of few-shot visual dubbing focuses on synchronizing the lip movements with arbitrary speech input for any talking head video.

Disentanglement Talking Head Generation

Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation

no code implementations CVPR 2022 Minghui Hu, Yujie Wang, Tat-Jen Cham, Jianfei Yang, P. N. Suganthan

We show that with the help of a content-rich discrete visual codebook from VQ-VAE, the discrete diffusion model can also generate high-fidelity images with global context, which compensates for the deficiency of the classical autoregressive model along pixel space.

Denoising Image Inpainting +1

Self-Supervised Video Representation Learning by Video Incoherence Detection

no code implementations26 Sep 2021 Haozhi Cao, Yuecong Xu, Jianfei Yang, Kezhi Mao, Lihua Xie, Jianxiong Yin, Simon See

This paper introduces a novel self-supervised method that leverages incoherence detection for video representation learning.

Action Recognition Contrastive Learning +2
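
As a rough illustration of this kind of pretext task, the sketch below constructs an "incoherent" clip by splicing frames from a different video; a model would then be trained to classify coherence. The splice strategy, length, and labeling are assumptions for illustration, not the paper's exact construction.

```python
# Hedged sketch: build an incoherent clip by splicing frames from another video,
# providing a self-supervised target for incoherence detection.
import torch

def make_incoherent(clip, other_clip, splice_len=4):
    """clip, other_clip: (T, C, H, W). Returns a spliced clip and a binary label."""
    t = clip.shape[0]
    start = torch.randint(0, t - splice_len + 1, (1,)).item()
    spliced = clip.clone()
    spliced[start:start + splice_len] = other_clip[start:start + splice_len]
    return spliced, 1  # 1 = incoherent

clip_a = torch.randn(16, 3, 64, 64)
clip_b = torch.randn(16, 3, 64, 64)
incoherent_clip, label = make_incoherent(clip_a, clip_b)
print(incoherent_clip.shape, label)
```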

Multi-Source Video Domain Adaptation with Temporal Attentive Moment Alignment

no code implementations21 Sep 2021 Yuecong Xu, Jianfei Yang, Haozhi Cao, Keyu Wu, Min Wu, Rui Zhao, Zhenghua Chen

Multi-Source Domain Adaptation (MSDA) is a more practical domain adaptation setting for real-world scenarios.

Unsupervised Domain Adaptation

Partial Video Domain Adaptation with Partial Adversarial Temporal Attentive Network

no code implementations ICCV 2021 Yuecong Xu, Jianfei Yang, Haozhi Cao, Qi Li, Kezhi Mao, Zhenghua Chen

For videos, such negative transfer could be triggered by both spatial and temporal features, which leads to a more challenging Partial Video Domain Adaptation (PVDA) problem.

Partial Domain Adaptation

Suppressing Mislabeled Data via Grouping and Self-Attention

1 code implementation ECCV 2020 Xiaojiang Peng, Kai Wang, Zhaoyang Zeng, Qing Li, Jianfei Yang, Yu Qiao

Specifically, this plug-and-play AFM first leverages a group-to-attend module to construct groups and assign attention weights to group-wise samples, and then uses a mixup module with the attention weights to interpolate massive noisy-suppressed samples.

Image Classification
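
The abstract above names two concrete building blocks. As a rough illustration of the second one, the hedged PyTorch sketch below mixes samples within each group using learned attention weights so that likely-mislabeled samples are down-weighted; the class name, scorer architecture, and `group_size` default are illustrative, not taken from the released code.

```python
# Illustrative sketch (not the authors' code): attention-weighted mixing of
# samples within a group, in the spirit of the attentive feature mixup idea.
import torch
import torch.nn as nn

class AttentiveGroupMixup(nn.Module):
    """Assigns attention weights to samples in a group and mixes their
    features with those weights."""
    def __init__(self, feat_dim, group_size=2):
        super().__init__()
        self.group_size = group_size
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 2, 1),
        )

    def forward(self, feats):
        # feats: (batch, feat_dim); batch is assumed divisible by group_size
        b, d = feats.shape
        groups = feats.view(b // self.group_size, self.group_size, d)
        scores = self.scorer(groups)            # (G, k, 1)
        weights = torch.softmax(scores, dim=1)  # attention within each group
        mixed = (weights * groups).sum(dim=1)   # attention-weighted interpolation
        return mixed, weights.squeeze(-1)

if __name__ == "__main__":
    x = torch.randn(8, 128)
    module = AttentiveGroupMixup(feat_dim=128, group_size=2)
    mixed, w = module(x)
    print(mixed.shape, w.shape)  # torch.Size([4, 128]) torch.Size([4, 2])
```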

PNL: Efficient Long-Range Dependencies Extraction with Pyramid Non-Local Module for Action Recognition

no code implementations9 Jun 2020 Yuecong Xu, Haozhi Cao, Jianfei Yang, Kezhi Mao, Jianxiong Yin, Simon See

Empirical results prove the effectiveness and efficiency of our PNL module, which achieves state-of-the-art performance of 83.09% on the Mini-Kinetics dataset, with decreased computation cost compared to the non-local block.

Action Recognition

ARID: A New Dataset for Recognizing Action in the Dark

1 code implementation6 Jun 2020 Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin, Simon See

We bridge the gap of the lack of data for this task by collecting a new dataset: the Action Recognition in the Dark (ARID) dataset.

Action Recognition

Towards Stable and Comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training

no code implementations ICLR 2020 Jianfei Yang, Han Zou, Yuxun Zhou, Lihua Xie

The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures.

Domain Adaptation Model Selection
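
A minimal, hedged sketch of the core idea described above: adversarial feature alignment through a gradient reversal layer, with a reconstruction network standing in for the usual domain classifier. Module names, shapes, and the `lambd` weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch: gradient reversal with a reconstruction network as the adversary.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the encoder.
        return -ctx.lambd * grad_output, None

class ReconstructionAdversary(nn.Module):
    """Decoder that reconstructs inputs from features; the reversed gradient
    pushes the encoder toward domain-aligned representations."""
    def __init__(self, feat_dim=64, out_dim=784, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, feats):
        reversed_feats = GradReverse.apply(feats, self.lambd)
        return self.decoder(reversed_feats)

if __name__ == "__main__":
    encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
    adversary = ReconstructionAdversary()
    x_target = torch.randn(16, 784)
    recon = adversary(encoder(x_target))
    loss = nn.functional.mse_loss(recon, x_target)
    loss.backward()  # encoder receives reversed reconstruction gradients
    print(recon.shape)
```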

Suppressing Uncertainties for Large-Scale Facial Expression Recognition

2 code implementations CVPR 2020 Kai Wang, Xiaojiang Peng, Jianfei Yang, Shijian Lu, Yu Qiao

Annotating a qualitative large-scale facial expression dataset is extremely difficult due to the uncertainties caused by ambiguous facial expressions, low-quality facial images, and the subjectiveness of annotators.

Facial Expression Recognition

Bootstrap Model Ensemble and Rank Loss for Engagement Intensity Regression

no code implementations8 Jul 2019 Kai Wang, Jianfei Yang, Da Guo, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao

Building on our winning solution from last year, we mainly explore head features and body features with a bootstrap strategy and two novel loss functions in this paper.

Region Attention Networks for Pose and Occlusion Robust Facial Expression Recognition

1 code implementation10 May 2019 Kai Wang, Xiaojiang Peng, Jianfei Yang, Debin Meng, Yu Qiao

Extensive experiments show that our RAN and region biased loss largely improve the performance of FER with occlusion and variant pose.

Facial Expression Recognition

Kervolutional Neural Networks

6 code implementations CVPR 2019 Chen Wang, Jianfei Yang, Lihua Xie, Junsong Yuan

Convolutional neural networks (CNNs) have enabled the state-of-the-art performance in many computer vision tasks.
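
The title refers to replacing the linear convolution response with a non-linear kernel function. Below is a hedged, simplified sketch of one such variant, a polynomial kervolution of the form (x·w + c_p)^{d_p}, applied patch-wise via unfold; the class name, initialization, and defaults are illustrative assumptions rather than the released implementation.

```python
# Illustrative sketch: a "kervolution" layer where the inner product of a
# convolution is replaced by a polynomial kernel applied to each patch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolyKervolution2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, cp=1.0, dp=2, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch * kernel_size * kernel_size) * 0.01)
        self.kernel_size, self.padding, self.cp, self.dp = kernel_size, padding, cp, dp

    def forward(self, x):
        b, c, h, w = x.shape
        patches = F.unfold(x, self.kernel_size, padding=self.padding)  # (B, C*k*k, L)
        linear = self.weight @ patches                                 # (B, out_ch, L)
        response = (linear + self.cp) ** self.dp                       # polynomial kernel
        out_h = h + 2 * self.padding - self.kernel_size + 1
        out_w = w + 2 * self.padding - self.kernel_size + 1
        return response.view(b, -1, out_h, out_w)

if __name__ == "__main__":
    layer = PolyKervolution2d(3, 8)
    y = layer(torch.randn(2, 3, 32, 32))
    print(y.shape)  # torch.Size([2, 8, 32, 32])
```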
