Search Results for author: Yunfan Liu

Found 14 papers, 4 papers with code

VMamba: Visual State Space Model

2 code implementations • 18 Jan 2024 • Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, YaoWei Wang, Qixiang Ye, Yunfan Liu

Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) stand as the two most popular foundation models for visual representation learning.

Computational Efficiency · Representation Learning

Spatial Transform Decoupling for Oriented Object Detection

1 code implementation • 21 Aug 2023 • Hongtian Yu, Yunjie Tian, Qixiang Ye, Yunfan Liu

Vision Transformers (ViTs) have achieved remarkable success in computer vision tasks.

Ranked #1 on Object Detection In Aerial Images on HRSC2016 (using extra training data)

Object · object-detection +2

3D-Aware Adversarial Makeup Generation for Facial Privacy Protection

no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong

The privacy and security of face data on social media are facing unprecedented challenges, as such data are vulnerable to unauthorized access and identification.

Face Recognition · Face Verification

Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation

no code implementations • 23 Nov 2022 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan

One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.

Disentanglement · Generative Adversarial Network

GAN-based Facial Attribute Manipulation

no code implementations • 23 Oct 2022 • Yunfan Liu, Qi Li, Qiyao Deng, Zhenan Sun, Ming-Hsuan Yang

Facial Attribute Manipulation (FAM) aims to aesthetically modify a given face image to render desired attributes, which has received significant attention due to its broad practical applications ranging from digital entertainment to biometric forensics.

Attribute

Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators?

no code implementations • 19 Nov 2020 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan

Generative Adversarial Networks (GANs) with style-based generators (e.g., StyleGAN) successfully enable semantic control over image synthesis, and recent studies have also revealed that interpretable image translations could be obtained by modifying the latent code.

Attribute · Disentanglement +2
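The Style Intervention abstract above mentions obtaining interpretable image translations by modifying a style-based generator's latent code. The following is a minimal sketch of that general editing pattern only, not the paper's method: the `generator` function is a toy stand-in for a StyleGAN-like decoder, and the attribute direction is an assumed precomputed vector.

```python
import numpy as np

latent_dim = 512
rng = np.random.default_rng(0)

def generator(w: np.ndarray) -> np.ndarray:
    # Toy stand-in for a style-based generator: any deterministic map from a
    # latent code to an "image" suffices to illustrate the editing pattern.
    return np.tanh(np.outer(w, w))[:64, :64]

# Unit-norm attribute direction in latent space (e.g. a "smile" direction),
# typically obtained by fitting a linear boundary on labelled latent codes.
direction = rng.standard_normal(latent_dim)
direction /= np.linalg.norm(direction)

w = rng.standard_normal(latent_dim)        # latent code of the source image
alpha = 3.0                                # edit strength

original = generator(w)                    # decode without the edit
edited = generator(w + alpha * direction)  # decode with the attribute shifted

print(original.shape, edited.shape, np.abs(edited - original).mean())
```

In practice the interesting question, and the one the paper targets, is how to choose the edit so that only the intended attribute changes while the rest of the image stays spatially intact.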

A3GAN: An Attribute-aware Attentive Generative Adversarial Network for Face Aging

no code implementations • 15 Nov 2019 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan

Face aging, which aims at aesthetically rendering a given face to predict its future appearance, has received significant research attention in recent years.

Attribute · Generative Adversarial Network

Age Progression and Regression with Spatial Attention Modules

no code implementations • 6 Mar 2019 • Qi Li, Yunfan Liu, Zhenan Sun

Age progression and regression refer to aesthetically rendering a given face image to present the effects of face aging and rejuvenation, respectively.

regression · Translation

Joint Iris Segmentation and Localization Using Deep Multi-task Learning Framework

1 code implementation • 31 Jan 2019 • Caiyong Wang, Yuhao Zhu, Yunfan Liu, Ran He, Zhenan Sun

In this paper, we propose a deep multi-task learning framework, named IrisParseNet, which exploits the inherent correlations between the pupil, iris, and sclera to boost the performance of iris segmentation and localization in a unified model.

Iris Segmentation · Multi-Task Learning +1
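The IrisParseNet entry describes predicting pupil, iris, and sclera jointly from shared features. The PyTorch sketch below only illustrates the generic shared-encoder / per-task-head pattern such a framework implies; the layer sizes, head names, and loss are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiTaskSegNet(nn.Module):
    """Toy shared-encoder network with one mask head per ocular region."""

    def __init__(self, in_ch: int = 3, feat: int = 32):
        super().__init__()
        # Shared encoder: features reused by every task head.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One lightweight head per correlated task (pupil / iris / sclera).
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(feat, 1, 1) for name in ("pupil", "iris", "sclera")
        })

    def forward(self, x: torch.Tensor) -> dict:
        shared = self.encoder(x)
        return {name: head(shared) for name, head in self.heads.items()}

model = MultiTaskSegNet()
logits = model(torch.randn(1, 3, 64, 64))
# Joint training sums the per-task segmentation losses on the shared features.
targets = {k: torch.randint(0, 2, (1, 1, 64, 64)).float() for k in logits}
loss = sum(nn.functional.binary_cross_entropy_with_logits(logits[k], targets[k])
           for k in logits)
loss.backward()
print({k: v.shape for k, v in logits.items()}, float(loss))
```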

Attribute-aware Face Aging with Wavelet-based Generative Adversarial Networks

no code implementations • CVPR 2019 • Yunfan Liu, Qi Li, Zhenan Sun

Since it is difficult to collect face images of the same subject over a long range of age span, most existing face aging methods resort to unpaired datasets to learn age mappings.

Attribute

Learning to Detect Human-Object Interactions

no code implementations • 17 Feb 2017 • Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, Jia Deng

We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them.

General Classification · Human-Object Interaction Detection +1
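The HOI entry defines the task output as a human box, an object box, and the interaction class linking them. A minimal sketch of that triplet structure follows; the field names and example values are illustrative, not the paper's data format.

```python
from dataclasses import dataclass
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

@dataclass
class HOIDetection:
    """One human-object interaction prediction: two boxes plus a verb label."""
    human_box: Box
    object_box: Box
    object_class: str      # e.g. "bicycle"
    interaction: str       # e.g. "ride"
    score: float           # confidence for the full triplet

# Example prediction: "person rides bicycle".
pred = HOIDetection(
    human_box=(34.0, 20.0, 180.0, 300.0),
    object_box=(60.0, 150.0, 240.0, 320.0),
    object_class="bicycle",
    interaction="ride",
    score=0.87,
)
print(pred)
```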

Combining Data-driven and Model-driven Methods for Robust Facial Landmark Detection

1 code implementation • 30 Nov 2016 • Hongwen Zhang, Qi Li, Zhenan Sun, Yunfan Liu

This Estimation-Correction-Tuning process combines the global robustness of the data-driven method (FCN), the outlier-correction capability of the model-driven method (PDM), and the non-parametric optimization of RLMS.

Facial Landmark Detection
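The last entry names an Estimation-Correction-Tuning cascade that chains an FCN estimate, a PDM-based correction, and RLMS refinement. The sketch below only illustrates the control flow of such a three-stage cascade; all three stage functions are hypothetical placeholders, not the released implementation.

```python
import numpy as np

def fcn_estimate(image: np.ndarray, n_points: int = 68) -> np.ndarray:
    """Placeholder for the data-driven stage: an initial landmark guess."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    return rng.uniform([0, 0], [w, h], size=(n_points, 2))

def pdm_correct(landmarks: np.ndarray) -> np.ndarray:
    """Placeholder for the model-driven stage: a shape-model projection that
    pulls outlier points back toward a plausible face shape."""
    mean_shape = landmarks.mean(axis=0, keepdims=True)
    return 0.7 * landmarks + 0.3 * mean_shape  # crude shrink toward the mean

def rlms_tune(image: np.ndarray, landmarks: np.ndarray, iters: int = 3) -> np.ndarray:
    """Placeholder for the non-parametric local refinement stage."""
    rng = np.random.default_rng(1)
    for _ in range(iters):
        landmarks = landmarks + rng.normal(0.0, 0.1, landmarks.shape)
    return landmarks

def estimate_correct_tune(image: np.ndarray) -> np.ndarray:
    initial = fcn_estimate(image)        # 1) global, data-driven estimate
    corrected = pdm_correct(initial)     # 2) shape-model outlier correction
    return rlms_tune(image, corrected)   # 3) local non-parametric tuning

print(estimate_correct_tune(np.zeros((128, 128, 3))).shape)
```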
