1 code implementation • 6 Apr 2023 • Chenyang Wang, Junjun Jiang, Zhiwei Zhong, Deming Zhai, Xianming Liu
In this paper, we build a novel parsing-map-guided face super-resolution network that extracts the face prior (i.e., the parsing map) directly from the low-resolution face image for subsequent use.
1 code implementation • 19 Feb 2023 • Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Xiangyang Ji
Guided depth map super-resolution (GDSR), which aims to reconstruct a high-resolution (HR) depth map from a low-resolution (LR) observation with the help of a paired HR color image, is a longstanding and fundamental problem that has attracted considerable attention from the computer vision and image processing communities.
1 code implementation • CVPR 2023 • Chenyang Wang, Junjun Jiang, Zhiwei Zhong, Xianming Liu
To circumvent this problem, the Fourier transform is introduced, which can capture global facial structure information and achieve an image-size receptive field.
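The image-size receptive field follows from a basic property of the 2D Fourier transform: every frequency coefficient depends on every pixel, so a single frequency-domain operation mixes information globally. A minimal numpy sketch (the fixed all-ones filter below is a stand-in for whatever learned per-frequency weights the network would use):

```python
import numpy as np

def spectral_filter(image, weights):
    """Filter an image in the Fourier domain.

    Each Fourier coefficient is a weighted sum over ALL pixels, so one
    frequency-domain multiplication has a receptive field equal to the
    whole image -- unlike a small spatial convolution kernel.
    """
    spectrum = np.fft.fft2(image)        # global transform: mixes every pixel
    filtered = spectrum * weights        # per-frequency weighting (learned in practice)
    return np.fft.ifft2(filtered).real   # back to the spatial domain

rng = np.random.default_rng(0)
face = rng.random((32, 32))
identity = np.ones((32, 32))             # identity filter: should recover the input
out = spectral_filter(face, identity)
print(np.allclose(out, face))            # True
```

With an identity (all-ones) filter the round trip recovers the input exactly, which is a quick sanity check that the transform pair is consistent.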
1 code implementation • CVPR 2022 • Wenbo Zhao, Xianming Liu, Zhiwei Zhong, Junjun Jiang, Wei Gao, Ge Li, Xiangyang Ji
Most existing methods either adopt an end-to-end supervised learning approach, in which large numbers of sparse-input/dense-ground-truth pairs are exploited as supervision; or treat upsampling at different scale factors as independent tasks, and must build multiple networks to handle the varying factors.
1 code implementation • 13 Dec 2021 • Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Xiangyang Ji
Specifically, we propose an attentional kernel learning module to generate dual sets of filter kernels from the guidance and the target, respectively, and then adaptively combine them by modeling the pixel-wise dependency between the two images.
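The adaptive combination step can be illustrated with a small numpy sketch. Here the two filter responses are taken as given, and a sigmoid of the pointwise target/guidance agreement stands in for the learned pixel-wise dependency model (the real module predicts these weights with a network; all function names below are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(filt_from_target, filt_from_guidance, target, guidance):
    """Pixel-wise blend of two filter responses.

    w is a per-pixel weight in (0, 1); the output at each pixel is a
    convex combination of the guidance-derived and target-derived
    filter responses, so regions where the guidance agrees with the
    target lean on the guidance-derived kernels.
    """
    w = sigmoid(target * guidance)  # hypothetical stand-in for the learned dependency
    return w * filt_from_guidance + (1.0 - w) * filt_from_target

rng = np.random.default_rng(1)
target = rng.random((8, 8))
guidance = rng.random((8, 8))
fused = adaptive_fuse(0.5 * target, 0.5 * guidance, target, guidance)
print(fused.shape)  # (8, 8)
```

Because the weight lies in (0, 1), the fused value at every pixel stays between the two filter responses, which is the intended "adaptive combination" behavior.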
1 code implementation • 4 Apr 2021 • Zhiwei Zhong, Xianming Liu, Junjun Jiang, Debin Zhao, Zhiwen Chen, Xiangyang Ji
Specifically, to effectively extract and combine relevant information from the LR depth and the HR guidance, we propose a multi-modal attention-based fusion (MMAF) strategy for hierarchical convolutional layers, including a feature enhancement block to select valuable features and a feature recalibration block to unify the similarity metrics of modalities with different appearance characteristics.
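The "select valuable features" role of the enhancement block can be sketched as a squeeze-and-excitation-style channel gate, a minimal stand-in that reweights channels by a global statistic (the paper's actual block uses learned attention; the gate below has no learned parameters and is purely illustrative):

```python
import numpy as np

def channel_gate(features):
    """Gate channels of a (C, H, W) feature map by their global mean.

    Squeeze: global average pool per channel.
    Excite:  map the pooled statistic through a sigmoid to (0, 1)
             (a learned MLP would sit here in a real attention block).
    Scale:   reweight each channel by its gate value.
    """
    pooled = features.mean(axis=(1, 2))           # (C,) global average pool
    gate = 1.0 / (1.0 + np.exp(-pooled))          # per-channel gate in (0, 1)
    return features * gate[:, None, None]         # broadcast over H, W

x = np.ones((4, 5, 5))
y = channel_gate(x)
print(y.shape)  # (4, 5, 5)
```

Since every gate value lies in (0, 1), the block can only attenuate channels, never amplify them; informative channels (by this statistic) are suppressed least.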