no code implementations • 27 Feb 2023 • Xihao Wang, JiaMing Lei, Hai Lan, Arafat Al-Jawari, Xian Wei
The dual equivariance of our model extracts equivariant features at both the local and global levels.
no code implementations • 18 Nov 2022 • Han Huang, Liliang Chen, Xihao Wang
The report proposes an effective solution for 3D human body reconstruction from multiple unconstrained frames, developed for the ECCV 2022 WCPA Challenge: From Face, Body and Fashion to 3D Virtual avatars I (track 1: Multi-View Based 3D Human Body Reconstruction).
no code implementations • 11 Sep 2022 • Xihao Wang, Xian Wei
Continual Learning aims to learn a stream of incoming new tasks while keeping the performance on previously learned tasks at a consistent level.
no code implementations • 5 Jan 2022 • Xian Wei, Xihao Wang, Hai Lan, JiaMing Lei, Yanhui Huang, Hui Yu, Jian Yang
Self-attention excels at capturing long-range relationships and enhances performance on vision tasks such as image classification and image captioning.
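The long-range modeling behavior mentioned above can be illustrated with a minimal scaled dot-product self-attention sketch. This is a generic, dependency-free illustration, not the paper's architecture; for simplicity the queries, keys, and values are the input vectors themselves, whereas real models apply learned Q/K/V projections.

```python
import math

def softmax(xs):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention over a list of feature vectors.

    Identity Q/K/V projections are assumed here for brevity; learned
    projection matrices would precede this step in a real model.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        # similarity of this position's query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        # each output is a convex combination over *all* positions,
        # which is what lets attention capture long-range relationships
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out
```

Because every output position attends to every input position, the receptive field is global in a single layer, in contrast to the local receptive field of a convolution.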
1 code implementation • 10 Dec 2021 • Hai Lan, Xihao Wang, Xian Wei
With the development of the self-attention mechanism, the Transformer model has demonstrated outstanding performance in the computer vision domain.