no code implementations • 22 Jul 2023 • Jiancong Feng, Yuan-Gen Wang, Fengchuang Xing
Based on this finding, we design a new network architecture that integrates depth-wise convolution with channel attention and omits blur-kernel estimation entirely, which in fact improves performance.
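The combination described above can be sketched as a per-channel (depth-wise) convolution followed by an SE-style channel-attention gate. This is a minimal illustrative sketch, not the paper's implementation: the kernel sizes, the two-layer gating MLP, and all weight shapes here are assumptions.

```python
import numpy as np

def depthwise_conv(x, kernels):
    # x: (C, H, W); kernels: (C, k, k) -- one filter per channel, 'same' padding.
    C, H, W = x.shape
    k = kernels.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for c in range(C):            # each channel is filtered independently
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

def channel_attention(x, w1, w2):
    # SE-style gate (an assumption): squeeze by global average pooling,
    # excite with a 2-layer MLP, then rescale each channel.
    s = x.mean(axis=(1, 2))                   # (C,) channel descriptor
    h = np.maximum(w1 @ s, 0.0)               # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gate in (0, 1), (C,)
    return x * a[:, None, None]
```

A block in the sketched architecture would simply chain the two: `channel_attention(depthwise_conv(x, kernels), w1, w2)`, with no kernel-estimation branch.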
1 code implementation • 21 Jun 2023 • Fengchuang Xing, Yuan-Gen Wang, Weixuan Tang, Guopu Zhu, Sam Kwong
Self-attention based Transformer has achieved great success in many computer vision tasks.
no code implementations • 22 Aug 2021 • Fengchuang Xing, Yuan-Gen Wang, Hanpin Wang, Leida Li, Guopu Zhu
To capture the long-range spatiotemporal dependencies of a video sequence, StarVQA encodes the space-time position information of each patch into the input of the Transformer.
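The space-time position encoding described above can be sketched as follows. StarVQA's exact scheme is not given here, so this stand-in uses a standard sinusoidal encoding over the flattened (frame, patch) index; the dimensions `T`, `N`, `d` are illustrative assumptions.

```python
import numpy as np

def spacetime_positions(T, N, d):
    # Sinusoidal encoding over the flattened (frame, patch) index.
    # Illustrative stand-in -- the paper's actual encoding may differ.
    pos = np.arange(T * N)[:, None]                 # (T*N, 1) joint index
    i = np.arange(d // 2)[None, :]
    freq = 1.0 / (10000 ** (2 * i / d))             # geometric frequencies
    pe = np.zeros((T * N, d))
    pe[:, 0::2] = np.sin(pos * freq)
    pe[:, 1::2] = np.cos(pos * freq)
    return pe

# Patch tokens from T frames with N patches each, embedding dim d (assumed sizes).
T, N, d = 4, 16, 32
tokens = np.random.default_rng(0).standard_normal((T * N, d))
tokens = tokens + spacetime_positions(T, N, d)      # Transformer input sequence
```

Adding the encoding to the patch embeddings lets self-attention distinguish tokens by both their frame index and their spatial location within the frame.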