1 code implementation • 10 Feb 2025 • Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Guojiang Zhao, Zhifeng Gao, Stan Z. Li
For the target antibody, we propose a novel Mutation Explainer to learn mutation preferences, which accounts for the marginal benefit of each mutation per residue.
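A minimal illustrative sketch of the per-residue marginal-benefit idea (not the paper's Mutation Explainer): every candidate point mutation is scored by the change in a placeholder affinity predictor relative to the wild type. `predict_affinity` and the toy sequence are assumptions.

```python
# Hypothetical sketch: score each candidate point mutation by its marginal
# benefit over the wild-type sequence under a placeholder affinity predictor.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_affinity(sequence: str) -> float:
    # placeholder scorer; in practice this would be a trained binding model
    return sum(ord(c) for c in sequence) % 97 / 96.0

def marginal_benefits(wild_type: str):
    base = predict_affinity(wild_type)
    benefits = {}
    for i, wt_aa in enumerate(wild_type):
        for aa in AMINO_ACIDS:
            if aa == wt_aa:
                continue
            mutant = wild_type[:i] + aa + wild_type[i + 1:]
            benefits[(i, wt_aa, aa)] = predict_affinity(mutant) - base
    return benefits

scores = marginal_benefits("QVQLVQSG")   # toy CDR fragment
best = max(scores, key=scores.get)
print("best mutation:", best, "gain:", scores[best])
```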
no code implementations • 14 Dec 2024 • Lirong Wu, Haitao Lin, Yufei Huang, Zhangyang Gao, Cheng Tan, Yunfan Liu, Tailin Wu, Stan Z. Li
Antibodies are Y-shaped proteins that protect the host by binding to specific antigens, and their binding is mainly determined by the Complementarity-Determining Regions (CDRs) of the antibody.
no code implementations • 11 Dec 2024 • Mu Zhang, Yunfan Liu, Yue Liu, Hongtian Yu, Qixiang Ye
Accurately depicting real-world landscapes in remote sensing (RS) images requires precise alignment between objects and their environment.
1 code implementation • 28 Nov 2024 • Hongda Liu, Yunfan Liu, Min Ren, Hao Wang, Yunlong Wang, Zhenan Sun
In skeleton-based action recognition, a key challenge is distinguishing between actions with similar trajectories of joints due to the lack of image-level details in skeletal representations.
Ranked #1 on Skeleton Based Action Recognition on NTU RGB+D 120
1 code implementation • 20 Aug 2024 • Hongtian Yu, Yangu Li, Yunfan Liu, Yunxuan Song, Xiaorui Lyu, Qixiang Ye
To address this issue, we propose Vision Calorimeter (ViC), a data-driven framework which migrates visual object detection techniques to high-energy particle images.
1 code implementation • 20 Jul 2024 • Lirong Wu, Yunfan Liu, Haitao Lin, Yufei Huang, Stan Z. Li
To bridge the gaps between powerful Graph Neural Networks (GNNs) and lightweight Multi-Layer Perceptron (MLPs), GNN-to-MLP Knowledge Distillation (KD) proposes to distill knowledge from a well-trained teacher GNN into a student MLP.
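A minimal GNN-to-MLP distillation sketch in PyTorch, assuming the teacher GNN's logits are precomputed and the student MLP sees only node features; all tensors and hyperparameters below are toy placeholders, not the paper's setup.

```python
# Toy sketch of GNN-to-MLP knowledge distillation: the student MLP matches the
# teacher GNN's soft predictions while also fitting the hard labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, train_mask, T=2.0, alpha=0.5):
    # soft targets from the teacher GNN + hard labels on the training nodes
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits[train_mask], labels[train_mask])
    return alpha * soft + (1 - alpha) * hard

# toy data: 100 nodes, 16-dim features, 7 classes
x = torch.randn(100, 16)
teacher_logits = torch.randn(100, 7)          # produced by a trained teacher GNN
labels = torch.randint(0, 7, (100,))
train_mask = torch.rand(100) < 0.3

student = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 7))
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = kd_loss(student(x), teacher_logits, labels, train_mask)
    loss.backward()
    opt.step()
```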
no code implementations • 12 Jun 2024 • Zicheng Liu, Siyuan Li, Li Wang, Zedong Wang, Yunfan Liu, Stan Z. Li
To mitigate the computational complexity of the self-attention mechanism on long sequences, linear attention uses computation tricks to achieve linear complexity, while state space models (SSMs) popularize a favorable practice of using a non-data-dependent memory pattern, i.e., emphasizing the near and neglecting the distant, to process sequences.
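A hedged sketch of the linear-attention reordering trick (the kernel feature map `elu(x)+1` is an assumption borrowed from common kernelized-attention practice, not necessarily this paper's choice): computing `Q(KᵀV)` instead of `(QKᵀ)V` avoids materializing the n×n attention matrix.

```python
# Linear attention sketch: O(n·d²) instead of O(n²·d) by reordering the matmuls.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim)
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)                  # (batch, dim, dim)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

q = torch.randn(2, 1024, 64)
k = torch.randn(2, 1024, 64)
v = torch.randn(2, 1024, 64)
out = linear_attention(q, k, v)   # (2, 1024, 64), no n×n attention matrix formed
```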
1 code implementation • 26 May 2024 • Zhaozhi Wang, Yue Liu, Yunfan Liu, Hongtian Yu, YaoWei Wang, Qixiang Ye, Yunjie Tian
A fundamental problem in learning robust and expressive visual representations lies in efficiently estimating the spatial relationships of visual semantics throughout the entire image.
12 code implementations • 18 Jan 2024 • Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, YaoWei Wang, Qixiang Ye, Jianbin Jiao, Yunfan Liu
At the core of VMamba is a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module.
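A rough sketch of the cross-scan/cross-merge idea behind SS2D, not the official VMamba implementation: the 2D feature map is unfolded into four directional 1D sequences, each is passed through a sequence model (an identity placeholder below), and the results are merged back into a 2D map.

```python
# Cross-scan / cross-merge sketch for processing a 2D feature map as 1D sequences.
import torch

def cross_scan(x):
    # x: (B, C, H, W) -> four sequences of shape (B, C, H*W)
    rows = x.flatten(2)                          # row-major scan
    cols = x.transpose(2, 3).flatten(2)          # column-major scan
    return [rows, rows.flip(-1), cols, cols.flip(-1)]

def cross_merge(seqs, h, w):
    rows, rows_r, cols, cols_r = seqs
    b, c, _ = rows.shape
    out = rows + rows_r.flip(-1)
    out = out + (cols + cols_r.flip(-1)).view(b, c, w, h).transpose(2, 3).flatten(2)
    return out.view(b, c, h, w)

def ss2d(x, seq_model=lambda s: s):   # seq_model stands in for a selective scan
    b, c, h, w = x.shape
    return cross_merge([seq_model(s) for s in cross_scan(x)], h, w)

y = ss2d(torch.randn(1, 8, 14, 14))
print(y.shape)   # torch.Size([1, 8, 14, 14])
```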
1 code implementation • 21 Aug 2023 • Hongtian Yu, Yunjie Tian, Qixiang Ye, Yunfan Liu
Vision Transformers (ViTs) have achieved remarkable success in computer vision tasks.
Ranked #2 on Object Detection In Aerial Images on HRSC2016 (using extra training data)
no code implementations • 26 Jun 2023 • Yueming Lyu, Yue Jiang, Ziwen He, Bo Peng, Yunfan Liu, Jing Dong
The privacy and security of face data on social media are facing unprecedented challenges, as such data are vulnerable to unauthorized access and identification.
no code implementations • 23 Nov 2022 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
no code implementations • 23 Oct 2022 • Yunfan Liu, Qi Li, Qiyao Deng, Zhenan Sun, Ming-Hsuan Yang
Facial Attribute Manipulation (FAM) aims to aesthetically modify a given face image to render desired attributes, which has received significant attention due to its broad practical applications ranging from digital entertainment to biometric forensics.
no code implementations • 15 Sep 2021 • Muyi Sun, Jian Wang, Yunfan Liu, Qi Li, Zhenan Sun
Biphasic facial age translation aims at predicting the appearance of the input face at any age.
no code implementations • 19 Nov 2020 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan
Generative Adversarial Networks (GANs) with style-based generators (e.g., StyleGAN) successfully enable semantic control over image synthesis, and recent studies have also revealed that interpretable image translations could be obtained by modifying the latent code.
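A minimal sketch of latent-code editing with a style-based generator, assuming a semantic direction has already been found; `age_direction` and the commented-out `generator` call are hypothetical placeholders, not this paper's pipeline.

```python
# Edit a latent code by moving it along a learned semantic direction.
import torch

def edit_latent(w, direction, alpha=1.0):
    # w: (1, 512) latent code; direction: (512,) vector for one attribute
    return w + alpha * direction / direction.norm()

w = torch.randn(1, 512)
age_direction = torch.randn(512)        # would be learned from labeled latents
w_older = edit_latent(w, age_direction, alpha=3.0)
# image = generator.synthesis(w_older)  # `generator` is an assumed placeholder
```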
no code implementations • 3 Jun 2020 • Qiyao Deng, Jie Cao, Yunfan Liu, Zhenhua Chai, Qi Li, Zhenan Sun
Face portrait editing has achieved great progress in recent years.
no code implementations • 15 Nov 2019 • Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan
Face aging, which aims at aesthetically rendering a given face to predict its future appearance, has received significant research attention in recent years.
no code implementations • 6 Mar 2019 • Qi Li, Yunfan Liu, Zhenan Sun
Age progression and regression refer to aesthetically rendering a given face image to present the effects of face aging and rejuvenation, respectively.
1 code implementation • 31 Jan 2019 • Caiyong Wang, Yuhao Zhu, Yunfan Liu, Ran He, Zhenan Sun
In this paper, we propose a deep multi-task learning framework, named IrisParseNet, to exploit the inherent correlations between pupil, iris and sclera to boost the performance of iris segmentation and localization in a unified model.
Ranked #1 on Iris Segmentation on CASIA
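A hedged multi-task sketch, not IrisParseNet itself: a shared encoder with separate heads for pupil, iris, and sclera masks, trained with a summed segmentation loss over toy data.

```python
# Multi-task segmentation sketch: one shared encoder, one head per region.
import torch
import torch.nn as nn

class MultiTaskSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(16, 1, 1) for name in ("pupil", "iris", "sclera")
        })

    def forward(self, x):
        feats = self.encoder(x)
        return {name: head(feats) for name, head in self.heads.items()}

net = MultiTaskSegNet()
img = torch.randn(2, 3, 64, 64)
masks = {n: torch.randint(0, 2, (2, 1, 64, 64)).float()
         for n in ("pupil", "iris", "sclera")}
loss = sum(nn.functional.binary_cross_entropy_with_logits(out, masks[n])
           for n, out in net(img).items())
loss.backward()
```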
no code implementations • CVPR 2019 • Yunfan Liu, Qi Li, Zhenan Sun
Since it is difficult to collect face images of the same subject over a long range of age span, most existing face aging methods resort to unpaired datasets to learn age mappings.
no code implementations • 17 Feb 2017 • Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, Jia Deng
We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them.
General Classification • Human-Object Interaction Detection • +1
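A toy sketch of the HOI output structure described above (hypothetical, not the paper's model): pair every detected human with every detected object and score each interaction class for the pair.

```python
# Enumerate human-object pairs and rank candidate interaction triplets.
from itertools import product
import random

def score_interaction(human_box, object_box, verb):
    # stand-in for a learned pairwise scorer over spatial/appearance features
    return random.random()

def detect_hoi(human_boxes, object_boxes, verbs):
    triplets = [{"human": h, "object": o, "verb": v,
                 "score": score_interaction(h, o, v)}
                for h, o, v in product(human_boxes, object_boxes, verbs)]
    return sorted(triplets, key=lambda t: t["score"], reverse=True)

humans = [(10, 10, 50, 120)]            # (x1, y1, x2, y2)
objects = [(40, 60, 80, 100)]
print(detect_hoi(humans, objects, ["hold", "drink_from"])[0])
```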
1 code implementation • 30 Nov 2016 • Hongwen Zhang, Qi Li, Zhenan Sun, Yunfan Liu
This Estimation-Correction-Tuning process combines the global robustness of the data-driven method (FCN), the outlier-correction capability of the model-driven method (PDM), and the non-parametric optimization of RLMS.
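A hedged sketch of the correction step's shape-model idea: project an initial landmark estimate onto a PCA shape subspace (a simple stand-in for the PDM) so that outlier points are pulled back toward plausible face shapes. All data below are toy.

```python
# Project a noisy landmark estimate onto a low-dimensional shape subspace.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes = 68, 10

# toy shape model: mean shape + orthonormal principal components
mean_shape = rng.normal(size=2 * n_points)
components = np.linalg.qr(rng.normal(size=(2 * n_points, n_modes)))[0]

def correct_with_shape_model(estimate):
    # project the initial (e.g. FCN-based) estimate onto the PDM subspace
    coeffs = components.T @ (estimate - mean_shape)
    return mean_shape + components @ coeffs

fcn_estimate = mean_shape + 0.1 * rng.normal(size=2 * n_points)
fcn_estimate[:4] += 5.0                      # simulate a few outlier coordinates
corrected = correct_with_shape_model(fcn_estimate)
print(np.abs(corrected - mean_shape).max())  # outliers are attenuated
```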