1 code implementation • 20 Nov 2024 • Wenli Huang, Ye Deng, Yang Wu, Jinjun Wang
By integrating the AC-Attention module into the DSen2-CR cloud removal framework, we significantly improve the model's ability to capture essential distant information, leading to more effective cloud removal.
no code implementations • 29 Jun 2023 • Siqi Hui, Sanping Zhou, Ye Deng, Jinjun Wang
Specifically, we select as the teacher model the one with the best validation accuracy during meta-training, and we constrain the symmetric Kullback-Leibler (SKL) divergence between the output distribution of the teacher model's linear classifier and that of the student model's.
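The SKL term above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the logit arrays and function names are assumptions for the example:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def symmetric_kl(teacher_logits, student_logits):
    """Symmetric KL divergence: KL(p||q) + KL(q||p), averaged over the batch."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float(np.mean(kl_pq + kl_qp))
```

Unlike the ordinary KL divergence, this quantity is symmetric in its arguments and vanishes only when the two classifier distributions coincide, which is what makes it a natural penalty for aligning the student's outputs with the teacher's.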
2 code implementations • 12 May 2023 • Ye Deng, Siqi Hui, Sanping Zhou, Deyu Meng, Jinjun Wang
Based on this attention mechanism, a network called $T$-former is designed for image inpainting.
1 code implementation • 14 Nov 2021 • Siqi Hui, Sanping Zhou, Ye Deng, Wenli Huang, Jinjun Wang
TPL and TSL are supersets of the standard perceptual and style losses and unlock their auxiliary potential.
1 code implementation • 19 Nov 2019 • Xiaoyu Wang, Ye Deng, Jinjun Wang
Recently, Generative Adversarial Networks (GANs) have become a popular research topic.