Search Results for author: Zhifeng Xie

Found 7 papers, 1 paper with code

GaussianBody: Clothed Human Reconstruction via 3D Gaussian Splatting

no code implementations · 18 Jan 2024 · Mengtian Li, Shengxiang Yao, Zhifeng Xie, Keyu Chen

In this work, we propose a novel clothed human reconstruction method called GaussianBody, based on 3D Gaussian Splatting.

Hierarchical Fashion Design with Multi-stage Diffusion Models

no code implementations · 15 Jan 2024 · Zhifeng Xie, Hao Li, Huiming Ding, Mengtian Li, Ying Cao

Cross-modal fashion synthesis and editing offer intelligent support to fashion designers by enabling the automatic generation and local modification of design drafts. While current diffusion models demonstrate commendable stability and controllability in image synthesis, they still face significant challenges in generating fashion designs from abstract design elements and in fine-grained editing. Abstract sensory expressions, e.g. office, business, and party, form the high-level design concepts, while measurable aspects such as sleeve length, collar type, and pant length are considered the low-level attributes of clothing. Controlling and editing fashion images through lengthy text descriptions is difficult. In this paper, we propose HieraFashDiff, a novel fashion design method using a shared multi-stage diffusion model that encompasses high-level design concepts and low-level clothing attributes in a hierarchical structure. Specifically, we categorize the input text into different levels and feed them to the diffusion model at different time steps, following the criteria of professional clothing designers. HieraFashDiff allows designers to incrementally add low-level attributes after high-level prompts for interactive editing. In addition, we design a differentiable loss function with a mask in the sampling process to keep non-edit areas unchanged. Comprehensive experiments on our newly collected hierarchical fashion dataset demonstrate that our proposed method outperforms other state-of-the-art competitors.

Fashion Synthesis · Image Generation
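The level-dependent prompt schedule and masked sampling described in the abstract can be sketched in a minimal, hypothetical form. The timestep split point and the blend function below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def prompt_for_timestep(t, total_steps, high_level, low_level):
    """Hypothetical schedule: early (noisier) denoising steps receive the
    high-level concept prompt; later steps switch to low-level attribute
    prompts. The half-way split is an assumption, not the paper's criterion."""
    if t > total_steps // 2:   # t counts down during sampling
        return high_level
    return low_level

def keep_non_edit_areas(x_generated, x_original, edit_mask):
    """Blend so regions outside the edit mask stay identical to the
    original image (a common masked-editing trick, assumed here)."""
    return edit_mask * x_generated + (1.0 - edit_mask) * x_original
```

A real sampler would apply the blend (or the paper's differentiable masked loss) at each denoising step rather than once at the end.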

High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning

no code implementations · CVPR 2023 · Chao Xu, Junwei Zhu, Jiangning Zhang, Yue Han, Wenqing Chu, Ying Tai, Chengjie Wang, Zhifeng Xie, Yong Liu

Specifically, we supplement the emotion style in text prompts and use an Aligned Multi-modal Emotion encoder to embed the text, image, and audio emotion modality into a unified space, which inherits rich semantic prior from CLIP.

Talking Face Generation

Joint Representation Learning for Text and 3D Point Cloud

no code implementations · 18 Jan 2023 · Rui Huang, Xuran Pan, Henry Zheng, Haojun Jiang, Zhifeng Xie, Shiji Song, Gao Huang

During the pre-training stage, we establish the correspondence of images and point clouds based on the readily available RGB-D data and use contrastive learning to align the image and point cloud representations.

Contrastive Learning · Instance Segmentation +4
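Aligning image and point-cloud representations with contrastive learning, as described above, is typically done with a symmetric InfoNCE objective over paired embeddings. The sketch below assumes that standard formulation, which may differ in detail from the paper's loss:

```python
import numpy as np

def info_nce_loss(img_emb, pc_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image / point-cloud embeddings.
    Row i of both matrices is a positive pair; all other rows are negatives.
    This is the standard contrastive formulation, assumed for illustration."""
    # L2-normalize embeddings so the dot product is a cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    pc = pc_emb / np.linalg.norm(pc_emb, axis=1, keepdims=True)
    logits = img @ pc.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(logits))

    def xent(l):
        # numerically stable cross-entropy with the diagonal as targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average image-to-cloud and cloud-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned pairs drive the loss toward zero; shuffled pairings raise it, which is what pulls matched image/point-cloud representations together during pre-training.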

Boosting Night-time Scene Parsing with Learnable Frequency

1 code implementation · 30 Aug 2022 · Zhifeng Xie, Sen Wang, Ke Xu, Zhizhong Zhang, Xin Tan, Yuan Xie, Lizhuang Ma

Based on this, we propose to exploit the image frequency distributions for night-time scene parsing.

Autonomous Driving · Scene Parsing
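Exploiting image frequency distributions usually starts from a Fourier decomposition of the image. The fixed-cutoff split below is a generic illustration; the paper learns its frequency weighting rather than using a hand-set radius:

```python
import numpy as np

def split_frequency(image, radius_ratio=0.25):
    """Split a grayscale image into low- and high-frequency components via a
    circular low-pass mask in Fourier space. The fixed cutoff radius is an
    assumption for illustration; the paper's method is learnable instead."""
    f = np.fft.fftshift(np.fft.fft2(image))       # centered 2D spectrum
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius_ratio * min(h, w)   # keep low frequencies
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = image - low                            # residual = high frequencies
    return low, high
```

The two components sum back to the input exactly, so a parser can re-weight them (e.g. boost high-frequency structure that survives low light) without losing information.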

Learning To Restore 3D Face From In-the-Wild Degraded Images

no code implementations · CVPR 2022 · Zhenyu Zhang, Yanhao Ge, Ying Tai, Xiaoming Huang, Chengjie Wang, Hao Tang, Dongjin Huang, Zhifeng Xie

In-the-wild 3D face modelling is a challenging problem as the predicted facial geometry and texture suffer from a lack of reliable clues or priors, when the input images are degraded.

3D Face Modelling · Face Reconstruction
