Search Results for author: Lincheng Li

Found 30 papers, 10 papers with code

Affective Behaviour Analysis via Integrating Multi-Modal Knowledge

no code implementations 16 Mar 2024 Wei Zhang, Feng Qiu, Chen Liu, Lincheng Li, Heming Du, Tiancheng Guo, Xin Yu

Affective Behavior Analysis aims to make technology emotionally smart, creating a world where devices can understand and react to our emotions as humans do.

Text-Guided 3D Face Synthesis -- From Generation to Editing

no code implementations 1 Dec 2023 Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu

In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts.

Face Generation Texture Synthesis

High-Quality 3D Face Reconstruction with Affine Convolutional Networks

no code implementations 22 Oct 2023 Zhiqian Lin, Jiangke Lin, Lincheng Li, Yi Yuan, Zhengxia Zou

In our method, an affine transformation matrix is learned from the affine convolution layer for each spatial location of the feature maps.

3D Face Reconstruction

CBARF: Cascaded Bundle-Adjusting Neural Radiance Fields from Imperfect Camera Poses

no code implementations 15 Oct 2023 Hongyu Fu, Xin Yu, Lincheng Li, Li Zhang

Existing volumetric neural rendering techniques, such as Neural Radiance Fields (NeRF), face limitations in synthesizing high-quality novel views when the camera poses of input images are imperfect.

3D Reconstruction Neural Rendering +1

EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior

1 code implementation 25 Aug 2023 Zhipeng Hu, Minda Zhao, Chaoyi Zhao, Xinyue Liang, Lincheng Li, Zeng Zhao, Changjie Fan, Xiaowei Zhou, Xin Yu

This limitation leads to the Janus problem, where multi-faced 3D models are generated under the guidance of such diffusion models.

Text to 3D

BAVS: Bootstrapping Audio-Visual Segmentation by Integrating Foundation Knowledge

no code implementations 20 Aug 2023 Chen Liu, Peike Li, Hu Zhang, Lincheng Li, Zi Huang, Dadong Wang, Xin Yu

In a nutshell, our BAVS is designed to eliminate the interference of background noise or off-screen sounds in segmentation by establishing the audio-visual correspondences in an explicit manner.

Audio Classification Segmentation

Audio-Visual Segmentation by Exploring Cross-Modal Mutual Semantics

no code implementations 31 Jul 2023 Chen Liu, Peike Li, Xingqun Qi, Hu Zhang, Lincheng Li, Dadong Wang, Xin Yu

However, we observed that prior arts tend to segment a certain salient object in a video regardless of the audio information.

Object Segmentation +1

EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation

1 code implementation 30 May 2023 Xingqun Qi, Chen Liu, Lincheng Li, Jie Hou, Haoran Xin, Xin Yu

In this work, we propose EmotionGesture, a novel framework for synthesizing vivid and diverse emotional co-speech 3D gestures from audio.

Gesture Generation

Diverse 3D Hand Gesture Prediction from Body Dynamics by Bilateral Hand Disentanglement

1 code implementation CVPR 2023 Xingqun Qi, Chen Liu, Muyi Sun, Lincheng Li, Changjie Fan, Xin Yu

Considering the asymmetric gestures and motions of two hands, we introduce a Spatial-Residual Memory (SRM) module to model spatial interaction between the body and each hand by residual learning.

Disentanglement

Zero-Shot Text-to-Parameter Translation for Game Character Auto-Creation

no code implementations CVPR 2023 Rui Zhao, Wei Li, Zhipeng Hu, Lincheng Li, Zhengxia Zou, Zhenwei Shi, Changjie Fan

In our method, taking the power of large-scale pre-trained multi-modal CLIP and neural rendering, T2P searches both continuous facial parameters and discrete facial parameters in a unified framework.

3D Generation Face Model +3

Object-Goal Visual Navigation via Effective Exploration of Relations Among Historical Navigation States

no code implementations CVPR 2023 Heming Du, Lincheng Li, Zi Huang, Xin Yu

In HiNL, we propose a History-aware State Estimation (HaSE) module to alleviate the impacts of dominant historical states on the current state estimation.

Visual Navigation

Towards Unbiased Volume Rendering of Neural Implicit Surfaces With Geometry Priors

no code implementations CVPR 2023 Yongqiang Zhang, Zhipeng Hu, Haoqian Wu, Minda Zhao, Lincheng Li, Zhengxia Zou, Changjie Fan

In this paper, we argue that this limited accuracy is due to the bias of their volume rendering strategies, especially when the viewing direction is nearly tangent to the surface.

Surface Reconstruction

FlowFace: Semantic Flow-guided Shape-aware Face Swapping

no code implementations 6 Dec 2022 Hao Zeng, Wei Zhang, Changjie Fan, Tangjie Lv, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Lincheng Li, Yu Ding, Xin Yu

Unlike most previous methods that focus on transferring the source inner facial features but neglect facial contours, our FlowFace can transfer both of them to a target face, thus leading to more realistic face swapping.

Face Swapping

Uncertainty-aware Gait Recognition via Learning from Dirichlet Distribution-based Evidence

no code implementations 15 Nov 2022 Beibei Lin, Chen Liu, Ming Wang, Lincheng Li, Shunli Zhang, Robby T. Tan, Xin Yu

Existing gait recognition frameworks retrieve an identity in the gallery based on the distance between a probe sample and the identities in the gallery.

Gait Recognition Retrieval

Facial Action Units Detection Aided by Global-Local Expression Embedding

no code implementations 25 Oct 2022 Zhipeng Hu, Wei Zhang, Lincheng Li, Yu Ding, Wei Chen, Zhigang Deng, Xin Yu

We find that AUs and facial expressions are highly associated, and existing facial expression datasets often contain a large number of identities.

3D Face Reconstruction

GaitGL: Learning Discriminative Global-Local Feature Representations for Gait Recognition

2 code implementations 2 Aug 2022 Beibei Lin, Shunli Zhang, Ming Wang, Lincheng Li, Xin Yu

GFR extractor aims to extract contextual information, e.g., the relationship among various body parts, and the mask-based LFR extractor is presented to exploit the detailed posture changes of local regions.

Gait Recognition

NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction with Implicit Neural Representations

no code implementations 22 Jul 2022 Yunlong Ran, Jing Zeng, Shibo He, Lincheng Li, Yingfeng Chen, Gimhee Lee, Jiming Chen, Qi Ye

In this paper, we explore for the first time the possibility of using implicit neural representations for autonomous 3D scene reconstruction by addressing two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data so that it can generalize to different scenes instead of hand-crafting one.

3D Reconstruction 3D Scene Reconstruction

GaitStrip: Gait Recognition via Effective Strip-based Feature Representations and Multi-Level Framework

1 code implementation 8 Mar 2022 Ming Wang, Beibei Lin, Xianda Guo, Lincheng Li, Zheng Zhu, Jiande Sun, Shunli Zhang, Xin Yu

ECM consists of the Spatial-Temporal feature extractor (ST), the Frame-Level feature extractor (FL) and SPB, and has two clear advantages: first, each branch focuses on a specific representation, which can be used to improve the robustness of the network.

Gait Recognition

Learning Implicit Body Representations from Double Diffusion Based Neural Radiance Fields

no code implementations 23 Dec 2021 Guangming Yao, Hongzhi Wu, Yi Yuan, Lincheng Li, Kun Zhou, Xin Yu

In this paper, we present a novel double diffusion based neural radiance field, dubbed DD-NeRF, to reconstruct human body geometry and render the human body appearance in novel views from a sparse set of images.

Novel View Synthesis

One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning

no code implementations 6 Dec 2021 Suzhen Wang, Lincheng Li, Yu Ding, Xin Yu

Hence, we propose a novel one-shot talking face generation framework by exploring consistent correlations between audio and visual motions from a specific speaker and then transferring audio-driven motion fields to a reference image.

Talking Face Generation

Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion

1 code implementation 20 Jul 2021 Suzhen Wang, Lincheng Li, Yu Ding, Changjie Fan, Xin Yu

As this keypoint based representation models the motions of facial regions, head, and backgrounds integrally, our method can better constrain the spatial and temporal consistency of the generated videos.

Image Generation Talking Head Generation

Prior Aided Streaming Network for Multi-task Affective Recognition at the 2nd ABAW2 Competition

no code implementations 8 Jul 2021 Wei Zhang, Zunhu Guo, Keyu Chen, Lincheng Li, Zhimeng Zhang, Yu Ding

Automatic affective recognition has been an important research topic in the human-computer interaction (HCI) area.

Emotion Recognition

Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset

1 code implementation CVPR 2021 Zhimeng Zhang, Lincheng Li, Yu Ding, Changjie Fan

To synthesize high-definition videos, we build a large in-the-wild high-resolution audio-visual dataset and propose a novel flow-guided talking face generation framework.

Talking Face Generation

Learning a Deep Motion Interpolation Network for Human Skeleton Animations

no code implementations Computer Animation & Virtual Worlds 2021 Chi Zhou, Zhangjiong Lai, Suzhen Wang, Lincheng Li, Xiaohan Sun, Yu Ding

In this work, we propose a novel carefully designed deep learning framework, named deep motion interpolation network (DMIN), to learn human movement habits from a real dataset and then to perform the interpolation function specific for human motions.

Motion Interpolation
