no code implementations • 17 Oct 2023 • Zhaojie Chu, Kailing Guo, Xiaofen Xing, Yilin Lan, Bolun Cai, Xiangmin Xu
In this study, we propose a novel framework, CorrTalk, which effectively establishes the temporal correlation between hierarchical speech features and facial activities of different intensities across distinct regions.
no code implementations • 4 Oct 2023 • Kaijun Gong, Zhuowen Yin, Yushu Li, Kailing Guo, Xiangmin Xu
To reduce the data-dependent redundancy, we devise a dynamic shuffle module to generate data-dependent permutation matrices for shuffling.
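The abstract does not specify how the data-dependent permutation matrices are produced; a common differentiable route is to derive a score matrix from the input and relax it toward a permutation with Sinkhorn normalization (alternating row/column normalization). The sketch below illustrates only that idea; the function names, the score parameterization `A * x`, and the Sinkhorn relaxation itself are assumptions, not the paper's method.

```python
import numpy as np

def sinkhorn(scores, n_iters=200):
    """Relax a score matrix toward a doubly-stochastic (soft permutation)
    matrix by alternating row and column normalization."""
    P = np.exp(scores - scores.max())  # positive entries, numerically stable
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # normalize rows
        P = P / P.sum(axis=0, keepdims=True)  # normalize columns
    return P

def dynamic_permutation(x, A):
    """Hypothetical data-dependent shuffle.

    x: pooled channel features, shape (C,); A: learned score weights,
    shape (C, C) (both names are illustrative). Because the scores are
    modulated by the input, the resulting soft permutation differs
    per sample, unlike a fixed channel shuffle."""
    scores = A * x[None, :]  # input modulates the learned pairwise scores
    return sinkhorn(scores)
```

A hard permutation can be recovered at inference time, e.g. with a Hungarian assignment on the soft matrix.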
no code implementations • 25 Sep 2023 • Pucheng Zhai, Kailing Guo, Fang Liu, Xiaofen Xing, Xiangmin Xu
Therefore, the pruning strategy can gradually prune the network and automatically determine an appropriate pruning rate for each layer.
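One standard way per-layer pruning rates emerge "automatically" is from a single global magnitude threshold: each layer's rate is simply the fraction of its weights falling below the shared threshold. This is a generic illustration of that effect, not the paper's specific strategy; the function name and quantile-based threshold are assumptions.

```python
import numpy as np

def layer_prune_rates(layers, global_sparsity):
    """Illustration: one global magnitude threshold induces different
    pruning rates per layer. `layers` is a list of weight arrays;
    `global_sparsity` is the target overall fraction to prune."""
    all_w = np.concatenate([np.abs(w).ravel() for w in layers])
    thresh = np.quantile(all_w, global_sparsity)  # shared threshold
    # A layer with many small-magnitude weights gets a higher rate.
    return [float((np.abs(w) < thresh).mean()) for w in layers]
```

Layers whose weights are generally small are pruned more aggressively, without any hand-set per-layer rate.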
1 code implementation • 12 Apr 2022 • Kailing Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu
In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance.
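A single projection step of this kind can be sketched as a truncated SVD followed by an energy-compensating rescale; this is a minimal simplification of the energy-transfer idea, assuming Frobenius-norm energy matching, and it omits the actual training loop (LRPET alternates such projections with SGD updates from scratch). The function name is hypothetical.

```python
import numpy as np

def low_rank_project_energy_transfer(W, rank):
    """Sketch: project a weight matrix onto rank `rank` via truncated SVD,
    then rescale so the Frobenius energy of the original matrix is
    preserved (a simplified stand-in for LRPET's energy transfer)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # best rank-r approximation
    scale = np.linalg.norm(W) / (np.linalg.norm(W_r) + 1e-12)
    return W_r * scale                               # transfer lost energy back
```

The rescale keeps the layer's output magnitude comparable after truncation, which is what lets training continue stably between projections.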
1 code implementation • 9 Oct 2021 • Zhenquan Lin, Kailing Guo, Xiaofen Xing, Xiangmin Xu
Comprehensive experiments show that WE outperforms other reactivation methods and plug-in training methods on typical convolutional neural networks, especially lightweight ones.
no code implementations • 4 Dec 2017 • Bolun Cai, Xiangmin Xu, Kailing Guo, Kui Jia, Dacheng Tao
With the powerful down-sampling process, the co-training DSN sets a new state of the art for image super-resolution.
no code implementations • ICCV 2017 • Bolun Cai, Xiangmin Xu, Kailing Guo, Kui Jia, Bin Hu, Dacheng Tao
We propose a joint intrinsic-extrinsic prior model to estimate both illumination and reflectance from an observed image.
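The decomposition being estimated follows the Retinex model, where an observed image factors into reflectance times illumination. The naive split below only illustrates that constraint; it is not the paper's joint intrinsic-extrinsic prior model, which additionally enforces structural and smoothness priors on the two components.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Naive Retinex-style split (illustrative only): take per-pixel
    brightness as a crude illumination estimate and divide it out to
    get reflectance, so that img ≈ reflectance * illumination.
    Real methods instead estimate a smooth illumination via priors."""
    L = img.max(axis=2)              # crude illumination: channel-wise max
    R = img / (L[..., None] + eps)   # reflectance satisfies img = R * L
    return L, R
```

Whatever estimator is used for illumination, the product of the two recovered components must reproduce the observation, which is the constraint any joint model optimizes under.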
no code implementations • 15 May 2017 • Xiaoyi Jia, Xiangmin Xu, Bolun Cai, Kailing Guo
However, previous methods mainly restore images from a single area of the low-resolution (LR) input, which limits the models' flexibility to infer details at various scales for the high-resolution (HR) output.