Search Results for author: Zhaoyang Li

Found 7 papers, 1 paper with code

Extracting knowledge from features with multilevel abstraction

no code implementations • 4 Dec 2021 • Jinhong Lin, Zhaoyang Li

Knowledge distillation aims at transferring knowledge from a large teacher model to a small student model, thereby substantially improving the student model's performance.
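As a minimal sketch of the teacher-to-student transfer described above, the classic distillation objective matches the student's temperature-softened output distribution to the teacher's (the temperature value and logits below are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between teacher and student soft targets,
    # the standard knowledge-distillation loss (Hinton et al., 2015).
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

# A student whose logits match the teacher's incurs zero distillation loss.
teacher = np.array([2.0, 1.0, 0.1])
print(distillation_loss(teacher, teacher))  # → 0.0
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth labels; the paper's multilevel-abstraction variant distills intermediate features as well, which this sketch does not cover.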

Data Augmentation · Self-Knowledge Distillation

Information Bottleneck Disentanglement for Identity Swapping

no code implementations • CVPR 2021 • Gege Gao, Huaibo Huang, Chaoyou Fu, Zhaoyang Li, Ran He

In this work, we propose a novel information disentangling and swapping network, called InfoSwap, to extract the most expressive information for identity representation from a pre-trained face recognition model.

Disentanglement · Face Recognition

FaceInpainter: High Fidelity Face Adaptation to Heterogeneous Domains

no code implementations • CVPR 2021 • Jia Li, Zhaoyang Li, Jie Cao, Xingguang Song, Ran He

In this work, we propose a novel two-stage framework named FaceInpainter to implement controllable Identity-Guided Face Inpainting (IGFI) under heterogeneous domains.

Facial Inpainting

A Neural Network for Detailed Human Depth Estimation from a Single Image

1 code implementation • ICCV 2019 • Sicong Tang, Feitong Tan, Kelvin Cheng, Zhaoyang Li, Siyu Zhu, Ping Tan

To achieve this goal, we separate the depth map into a smooth base shape and a residual detail shape and design a network with two branches to regress them respectively.
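The base-plus-residual decomposition described above can be sketched as follows. The two branch functions are hypothetical stand-ins for the paper's learned networks and exist only to illustrate how the final depth map is composed:

```python
import numpy as np

rng = np.random.default_rng(0)

def base_branch(image):
    # Stand-in for the branch regressing the smooth base shape:
    # here simply a constant (maximally smooth) coarse estimate.
    return np.full(image.shape, image.mean())

def detail_branch(image):
    # Stand-in for the branch regressing the residual detail shape:
    # the high-frequency remainder around the base.
    return image - image.mean()

image = rng.random((4, 4))
# Final depth prediction = smooth base shape + residual detail shape.
depth = base_branch(image) + detail_branch(image)
```

With these trivial stand-ins the two branches exactly reconstruct the input, which only demonstrates the additive composition; the actual branches in the paper are convolutional networks trained to regress each component from a single RGB image.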

Depth Estimation

Action Recognition Based on Joint Trajectory Maps Using Convolutional Neural Networks

no code implementations • 8 Nov 2016 • Pichao Wang, Zhaoyang Li, Yonghong Hou, Wanqing Li

Recently, Convolutional Neural Networks (ConvNets) have shown promising performances in many computer vision tasks, especially image-based recognition.

Action Recognition

Combining ConvNets with Hand-Crafted Features for Action Recognition Based on an HMM-SVM Classifier

no code implementations • 1 Feb 2016 • Pichao Wang, Zhaoyang Li, Yonghong Hou, Wanqing Li

This paper proposes a new framework for RGB-D-based action recognition that takes advantage of hand-designed features from skeleton data and deeply learned features from depth maps, and effectively exploits both local and global temporal information.
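A minimal sketch of combining the two feature streams: per-frame hand-crafted skeleton descriptors and deep depth-map features are fused by concatenation before sequence classification. The feature dimensions here are arbitrary assumptions, and the downstream HMM-SVM classifier is not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features for a 30-frame clip:
skeleton_feats = rng.random((30, 60))   # hand-crafted skeleton descriptors
depth_feats = rng.random((30, 128))     # deeply learned depth-map features

# Late fusion by concatenation along the feature axis; the fused sequence
# would then be fed to a sequence classifier (an HMM-SVM in the paper).
fused = np.concatenate([skeleton_feats, depth_feats], axis=1)
print(fused.shape)  # → (30, 188)
```

Concatenation is only one simple fusion strategy; the paper's contribution lies in how the fused representation is modeled temporally, which this sketch does not attempt to reproduce.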

Action Recognition
