Search Results for author: Ye Pan

Found 13 papers, 3 papers with code

EmoFace: Audio-driven Emotional 3D Face Animation

1 code implementation • 17 Jul 2024 Chang Liu, Qunfen Lin, Zijiao Zeng, Ye Pan

Our approach generates facial expressions with multiple emotions and produces random yet natural blinks and eye movements, while maintaining accurate lip synchronization.

3D Face Animation

Cost-Effective RF Fingerprinting Based on Hybrid CVNN-RF Classifier with Automated Multi-Dimensional Early-Exit Strategy

no code implementations • 21 Jun 2024 Jiayan Gan, Zhixing Du, Qiang Li, Huaizong Shao, Jingran Lin, Ye Pan, Zhongyi Wen, Shafei Wang

However, DL algorithms face a computational-cost problem, as both the difficulty of the RFF task and the size of the DNNs involved have increased dramatically.

Scheduling

EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

no code implementations • 2 Apr 2024 Shuai Tan, Bin Ji, Mengxiao Bi, Ye Pan

Achieving disentangled control over multiple facial motions and accommodating diverse input modalities greatly enhance the applicability and entertainment value of talking head generation.

Disentanglement • Talking Head Generation

LR-FPN: Enhancing Remote Sensing Object Detection with Location Refined Feature Pyramid Network

no code implementations • 2 Apr 2024 Hanqian Li, Ruinan Zhang, Ye Pan, Junchi Ren, Fei Shen

To address this, we propose a novel location refined feature pyramid network (LR-FPN) to enhance the extraction of shallow positional information and facilitate fine-grained context interaction.

Object • Object Detection +1

FlowVQTalker: High-Quality Emotional Talking Face Generation through Normalizing Flow and Quantization

no code implementations • CVPR 2024 Shuai Tan, Bin Ji, Ye Pan

Specifically, we develop a flow-based coefficient generator that encodes the dynamics of facial emotion into a multi-emotion-class latent space represented as a mixture distribution.

Quantization • Talking Face Generation

Style2Talker: High-Resolution Talking Head Generation with Emotion Style and Art Style

no code implementations • 11 Mar 2024 Shuai Tan, Bin Ji, Ye Pan

Although automatically animating audio-driven talking heads has recently received growing interest, previous efforts have mainly concentrated on achieving lip synchronization with the audio, neglecting two crucial elements for generating expressive videos: emotion style and art style.

Talking Face Generation • Talking Head Generation

Say Anything with Any Style

no code implementations • 11 Mar 2024 Shuai Tan, Bin Ji, Yu Ding, Ye Pan

To adapt to different speaking styles, we avoid a single universal network and instead explore an elaborate HyperStyle module that produces style-specific weight offsets for the style branch.
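The core idea, generating per-style weight offsets from a style embedding rather than training one network per style, can be sketched as follows. This is a minimal illustration, not the paper's architecture: the linear hypernetwork, layer shapes, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weights of the (hypothetical) style branch, shared across all styles.
W_base = rng.standard_normal((16, 8)) * 0.1

# A toy hypernetwork: a single linear map from a 4-d style embedding
# to a flattened weight offset for the 16x8 layer above.
H = rng.standard_normal((16 * 8, 4)) * 0.01

def hyper_offset(style_code: np.ndarray) -> np.ndarray:
    """Map a style embedding to a weight offset, so each speaking style
    gets its own effective weights W_base + dW without a separate network."""
    return (H @ style_code).reshape(16, 8)

style_code = rng.standard_normal(4)      # embedding of one speaking style
W_style = W_base + hyper_offset(style_code)
```

In practice the hypernetwork would itself be a learned (usually nonlinear) network and would emit offsets for every layer of the style branch; the sketch only shows the weight-offset mechanism.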

HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks

no code implementations • 19 Apr 2023 Zhuo Chen, Xudong Xu, Yichao Yan, Ye Pan, Wenhan Zhu, Wayne Wu, Bo Dai, Xiaokang Yang

While the use of 3D-aware GANs bypasses the requirement for 3D data, we further alleviate the need for style images by using the CLIP model as the stylization guidance.

Attribute

Instant Photorealistic Neural Radiance Fields Stylization

2 code implementations • 29 Mar 2023 Shaoxu Li, Ye Pan

Our approach models a neural radiance field based on neural graphics primitives, which uses a hash-table-based position encoder for position embedding.
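A hash-table-based position encoder of this kind (popularized by Instant-NGP) can be sketched as below: each point's grid-cell corners at several resolutions are hashed into a small feature table, and the corner features are trilinearly interpolated and concatenated. This is an illustrative NumPy sketch under assumed sizes, not the paper's implementation.

```python
import numpy as np

def hash_encode(xyz, n_levels=4, table_size=2**14, n_features=2,
                base_res=16, growth=2.0, seed=0):
    """Multiresolution hash encoding sketch for points xyz in [0, 1)^3.
    Returns (N, n_levels * n_features) features."""
    rng = np.random.default_rng(seed)
    tables = [rng.standard_normal((table_size, n_features)) * 1e-2
              for _ in range(n_levels)]
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        scaled = xyz * res
        cell = np.floor(scaled).astype(np.int64)   # lower grid-cell corner
        frac = scaled - cell                       # trilinear weights
        acc = np.zeros((xyz.shape[0], n_features))
        for corner in range(8):                    # 8 corners of each cell
            offs = np.array([(corner >> d) & 1 for d in range(3)])
            idx = (cell + offs).astype(np.uint64)
            h = idx * primes                       # spatial hash of the corner
            key = (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % table_size
            w = np.prod(np.where(offs, frac, 1.0 - frac), axis=1)
            acc += w[:, None] * table[key.astype(np.int64)]
        feats.append(acc)
    return np.concatenate(feats, axis=1)
```

The point of the hash table is that fine grids need not be stored densely: collisions are tolerated and resolved implicitly by the downstream MLP, which is what makes the encoder fast and memory-light.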

Image Generation Image Stylization +1

Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation

no code implementations • 28 Mar 2023 Yuhao Cheng, Yichao Yan, Wenhan Zhu, Ye Pan, Bowen Pan, Xiaokang Yang

Head generation with diverse identities is an important task in computer vision and computer graphics, widely used in multimedia applications.

Interactive Geometry Editing of Neural Radiance Fields

1 code implementation • 21 Mar 2023 Shaoxu Li, Ye Pan

We use two proxy cages (an inner cage and an outer cage) to edit a scene.
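The basic mechanics of cage-based editing can be shown with a toy example: points inside an axis-aligned box cage move by the trilinear interpolation of offsets attached to the cage's eight corners, while points outside stay fixed (a crude stand-in for the paper's outer cage, which bounds the edited region). Shapes and names here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def cage_deform(points, cage_min, cage_max, corner_offsets):
    """Deform points inside an axis-aligned box cage.
    corner_offsets: (8, 3) displacement attached to each cage corner,
    ordered by corner index bits (x = bit 0, y = bit 1, z = bit 2)."""
    t = (points - cage_min) / (cage_max - cage_min)   # local coords in [0, 1]
    inside = np.all((t >= 0) & (t <= 1), axis=1)
    disp = np.zeros_like(points)
    for corner in range(8):
        offs = np.array([(corner >> d) & 1 for d in range(3)])
        # Trilinear weight of this corner for every point.
        w = np.prod(np.where(offs, t, 1.0 - t), axis=1)
        disp += w[:, None] * corner_offsets[corner]
    return np.where(inside[:, None], points + disp, points)
```

Because the trilinear weights sum to one, translating every corner by the same vector translates all interior points rigidly; moving corners independently produces smooth, localized deformations, which is the behavior a cage-based NeRF editor exposes to the user.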

3D geometry

EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation

no code implementations • ICCV 2023 Shuai Tan, Bin Ji, Ye Pan

During training, the emotion embedding and mouth features are used as keys, and the corresponding expression features are used as values to create key-value pairs stored in the proposed Motion Memory Net.
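A key-value memory of this kind is read with soft attention: a query is matched against the stored keys, and the attention weights combine the stored values. The sketch below shows that lookup in NumPy; the query being a fused emotion-plus-mouth feature and the values being expression features follows the description above, but all shapes and names are illustrative.

```python
import numpy as np

def memory_recall(query_keys, mem_keys, mem_values, temperature=1.0):
    """Soft key-value memory lookup.
    query_keys: (Q, D_k) queries; mem_keys: (M, D_k) stored keys;
    mem_values: (M, D_v) stored values. Returns (Q, D_v)."""
    scores = query_keys @ mem_keys.T / temperature     # similarity (Q, M)
    scores -= scores.max(axis=1, keepdims=True)        # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ mem_values                           # weighted value recall
```

When a query closely matches one stored key, the softmax concentrates on that slot and the lookup returns (approximately) the paired value, which is how stored expression features can be recalled at inference time from emotion and mouth cues alone.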

Talking Face Generation

Latency Minimization for Multiuser Computation Offloading in Fog-Radio Access Networks

no code implementations • 20 Jul 2019 Wei Zhang, Shafei Wang, Ye Pan, Qiang Li, Jingran Lin, Xiaoxiao Wu

This paper considers computation offloading in fog-radio access networks (F-RAN), where multiple user equipments (UEs) offload their computation tasks to the F-RAN through a number of fog nodes.
