Search Results for author: Xinchen Liu

Found 17 papers, 8 papers with code

HumanNeRF-SE: A Simple yet Effective Approach to Animate HumanNeRF with Diverse Poses

no code implementations · 4 Dec 2023 · Caoyuan Ma, Yu-Lun Liu, Zhixiang Wang, Wu Liu, Xinchen Liu, Zheng Wang

We present HumanNeRF-SE, which can synthesize diverse novel pose images with simple input.

SigFormer: Sparse Signal-Guided Transformer for Multi-Modal Human Action Segmentation

1 code implementation · 29 Nov 2023 · Qi Liu, Xinchen Liu, Kun Liu, Xiaoyan Gu, Wu Liu

Nowadays, the majority of approaches concentrate on the fusion of dense signals (i.e., RGB, optical flow, and depth maps).

Action Segmentation · Optical Flow Estimation · +1

Parsing is All You Need for Accurate Gait Recognition in the Wild

1 code implementation · 31 Aug 2023 · Jinkai Zheng, Xinchen Liu, Shuai Wang, Lihao Wang, Chenggang Yan, Wu Liu

Furthermore, due to the lack of suitable datasets, we build the first parsing-based dataset for gait recognition in the wild, named Gait3D-Parsing, by extending the large-scale and challenging Gait3D dataset.

Gait Recognition in the Wild · Human Parsing

REMOT: A Region-to-Whole Framework for Realistic Human Motion Transfer

no code implementations · 1 Sep 2022 · Quanwei Yang, Xinchen Liu, Wu Liu, Hongtao Xie, Xiaoyan Gu, Lingyun Yu, Yongdong Zhang

Human Video Motion Transfer (HVMT) aims to generate, given an image of a source person, a video of that person imitating the motion of a driving person.

Delving into the Frequency: Temporally Consistent Human Motion Transfer in the Fourier Space

no code implementations · 1 Sep 2022 · Guang Yang, Wu Liu, Xinchen Liu, Xiaoyan Gu, Juan Cao, Jintao Li

To close the frequency gap between the natural and synthetic videos, we propose a novel Frequency-based human MOtion TRansfer framework, named FreMOTR, which can effectively mitigate the spatial artifacts and the temporal inconsistency of the synthesized videos.

DeepFake Detection · Face Swapping
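The "frequency gap" idea above can be made concrete with a toy comparison of two frames in the Fourier domain. This is a hypothetical sketch for intuition only, not the FreMOTR regularizer; the function name `frequency_gap` and the use of plain amplitude spectra are assumptions.

```python
import numpy as np

def frequency_gap(frame_a, frame_b):
    """Mean absolute difference between the amplitude spectra of two
    grayscale frames. A toy proxy for a 'frequency gap', not the
    actual FreMOTR objective."""
    # 2-D FFT of each frame, keeping only the magnitude (amplitude) spectrum
    amp_a = np.abs(np.fft.fft2(frame_a))
    amp_b = np.abs(np.fft.fft2(frame_b))
    return float(np.mean(np.abs(amp_a - amp_b)))
```

Identical frames give a gap of zero, while synthetic artifacts (e.g., added high-frequency noise) push the amplitude spectra apart and increase the gap.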

Gait Recognition in the Wild with Multi-hop Temporal Switch

1 code implementation · 1 Sep 2022 · Jinkai Zheng, Xinchen Liu, Xiaoyan Gu, Yaoqi Sun, Chuang Gan, Jiyong Zhang, Wu Liu, Chenggang Yan

Current methods that obtain state-of-the-art performance on in-the-lab benchmarks achieve much worse accuracy on the recently proposed in-the-wild datasets because these methods can hardly model the varied temporal dynamics of gait sequences in unconstrained scenes.

Gait Recognition in the Wild

MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point Cloud Action Recognition

no code implementations · 1 Sep 2022 · Xiaodong Chen, Wu Liu, Xinchen Liu, Yongdong Zhang, Jungong Han, Tao Mei

In DestFormer, the spatial and temporal dimensions of the 4D point cloud videos are decoupled to achieve efficient self-attention for learning both long-term and short-term features.

Action Recognition
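The decoupling described above (attention within each frame's points, then attention across frames) can be sketched with plain scaled dot-product attention. This is a minimal NumPy illustration of the general factorized-attention pattern, not the DestFormer implementation; the shapes and function names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over x of shape (tokens, dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def decoupled_attention(video):
    """video: (T frames, N points, D channels).
    Spatial attention within each frame, then temporal attention per point."""
    T, N, D = video.shape
    # spatial step: attend across the N points inside each of the T frames
    spatial = np.stack([self_attention(video[t]) for t in range(T)])
    # temporal step: attend across the T frames for each of the N points
    return np.stack([self_attention(spatial[:, n]) for n in range(N)], axis=1)
```

The payoff of the factorization is cost: joint attention over all T·N tokens is O((TN)²), while the decoupled version is O(T·N² + N·T²).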

Gait Recognition in the Wild with Dense 3D Representations and A Benchmark

1 code implementation · CVPR 2022 · Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei

Based on Gait3D, we comprehensively compare our method with existing gait recognition approaches, which reflects the superior performance of our framework and the potential of 3D representations for gait recognition in the wild.

Gait Recognition in the Wild

Part-level Action Parsing via a Pose-guided Coarse-to-Fine Framework

no code implementations · 9 Mar 2022 · Xiaodong Chen, Xinchen Liu, Wu Liu, Kun Liu, Dong Wu, Yongdong Zhang, Tao Mei

Therefore, researchers have started to focus on a new task, Part-level Action Parsing (PAP), which aims not only to predict the video-level action but also to recognize the frame-level fine-grained actions or interactions of body parts for each person in the video.

Action Parsing · Action Recognition

A Baseline Framework for Part-level Action Parsing and Action Recognition

no code implementations · 7 Oct 2021 · Xiaodong Chen, Xinchen Liu, Kun Liu, Wu Liu, Tao Mei

This technical report introduces our 2nd-place solution to the Kinetics-TPS Track on Part-level Action Parsing in the ICCV DeeperAction Workshop 2021.

Action Parsing · Action Recognition · +1

TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition

1 code implementation · 9 Feb 2021 · Jinkai Zheng, Xinchen Liu, Chenggang Yan, Jiyong Zhang, Wu Liu, XiaoPing Zhang, Tao Mei

Despite significant improvements in gait recognition with deep learning, existing studies still neglect a more practical but challenging scenario: unsupervised cross-domain gait recognition, which aims to learn a model on a labeled dataset and then adapt it to an unlabeled dataset.

Gait Recognition

SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization

1 code implementation · 8 Dec 2020 · Xinhai Liu, Xinchen Liu, Yu-Shen Liu, Zhizhong Han

The task of point cloud upsampling aims to acquire dense and uniform point sets from sparse and irregular point sets.
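As a point of reference for the task definition above, the simplest densification baseline inserts midpoints between neighboring points. This is a hypothetical NumPy sketch of that naive baseline only, not the SPU-Net method; `midpoint_upsample` and its `k` parameter are made up for illustration.

```python
import numpy as np

def midpoint_upsample(points, k=1):
    """Densify an (N, 3) point set by inserting the midpoint between
    each point and its k nearest neighbors. A naive baseline, not SPU-Net."""
    # pairwise distances between all points
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-matches
    nn = np.argsort(dists, axis=1)[:, :k]    # indices of k nearest neighbors
    mids = np.array([(points[i] + points[j]) / 2.0
                     for i in range(len(points)) for j in nn[i]])
    # deduplicate midpoints shared by mutual nearest-neighbor pairs
    return np.vstack([points, np.unique(mids, axis=0)])
```

A baseline like this densifies but does not regularize: the new points stay on chords between existing samples, which is exactly the kind of non-uniformity that learned upsamplers such as SPU-Net aim to avoid.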

FastReID: A Pytorch Toolbox for General Instance Re-identification

3 code implementations · 4 Jun 2020 · Lingxiao He, Xingyu Liao, Wu Liu, Xinchen Liu, Peng Cheng, Tao Mei

General instance re-identification is an important task in computer vision with wide practical applications, such as person/vehicle re-identification, face recognition, wildlife protection, commodity tracing, and snapshop. To meet the increasing application demand for general instance re-identification, we present FastReID, a widely used software system at JD AI Research.

Face Recognition · Image Retrieval · +2

Multi-Granularity Reasoning for Social Relation Recognition from Images

no code implementations · 10 Jan 2019 · Meng Zhang, Xinchen Liu, Wu Liu, Anfu Zhou, Huadong Ma, Tao Mei

To bridge the domain gap, we propose a Multi-Granularity Reasoning framework for social relation recognition from images.

PVSS: A Progressive Vehicle Search System for Video Surveillance Networks

no code implementations · 10 Jan 2019 · Xinchen Liu, Wu Liu, Huadong Ma, Shuangqun Li

In this paper, we design a Progressive Vehicle Search System, named PVSS, to solve the above problems.
